Hyper-Convergence: a paradigm shift, or an inevitable evolution?

With the recent CRN article reporting that HP is in talks to acquire SimpliVity, http://www.crn.com/news/data-center/300073066/sources-hewlett-packard-in-talks-to-acquire-hyper-converged-infrastructure-startup-simplivity.htm, it seems a good time to look at what SimpliVity does and why it is an attractive acquisition target.

In order to answer both questions, we need to look at history. First was the mainframe. It was great, but inflexible, so we moved to distributed computing. That was substantially better and brought us into a new way of computing, but there was a great deal of waste. Virtualization came along and let us drive higher utilization rates on our systems, but it required an incredible amount of up-front design work, and it allowed siloed IT departments to proliferate, since it did not force anyone to learn a skill set outside their own particular area of expertise. This led us to converged infrastructure: the idea that you should be able to get everything, or at the very least support, from a single vendor. Finally came the converged system. It provided a single-vendor, single-support solution packaged as one system, and we grew the infrastructure based on performance or capacity. It was largely inflexible by design, but it was simple to scale and predictable.

To solve that inflexibility, companies started working on the concept of hyper-convergence: smaller, discrete converged systems, many of which created high availability not through redundant hardware in each node but through clustering. The software lived on each discrete converged node, and it was good. Compute, network, and storage all scale out in small, pre-defined nodes, enabling capacity planning and allowing fewer IT administrators to manage larger environments. This is a true Software Defined Data Center, but at a scale that can start small and grow organically.

Why then is this interesting for a company like HP? As always, I am not an insider and have no information that is not public; I am simply speculating based on what I am seeing in the industry. Looking at HP’s Converged Systems strategy and at what the market is doing, I believe that in the near future the larger players in this space will look to converged systems as the way to sell. Hyper-convergence is a way to address the part of the market that is either too small for traditional converged systems or needs something they cannot provide. It can be a complementary product to existing converged systems and will round out solutions in many datacenters.

Hyper-convergence is part of the inevitable evolution of technology. Whether or not HP ends up purchasing SimpliVity, these kinds of conversations show that the concept is starting to pick up steam. It is time for companies to innovate or die, and this is a perfect opportunity for an already great company to keep moving forward.


Defining the cloud Part 3: Open

Open

This may seem like an odd topic for the cloud, but I think it is important.  One of the questions I have been asked many times when discussing cloud solutions with customers is around portability of virtual machines, and interoperability with other providers.  This of course raises some obvious concerns for companies who want to make money building or providing cloud services and platforms.

We live in a soundbite culture.  If it can’t be said in 140 characters or less, we don’t really want to read it.  Hopefully you are still reading at this point; this is way past a tweet.  We like monthly services instead of owning a datacenter: who wants to pay for the equipment when you can just rent it in the cloud?  More and more services are popping up to make it simpler for us to rent a house for a day or a few days, get a taxi, rent a car by the mile, or a bike by the hour.  There is nothing wrong with this, but we need to understand what makes it possible.  What if each car had different controls, what if there were no standard for steering?  Providers can only build these services because everything rests on open, agreed-upon standards.

In order for the cloud to be truly useful, it must be based on standards.  This is where OpenStack matters most.  Going far beyond just a set of APIs, OpenStack gives us a core set of features that are common to everyone.  Of course, to make money beyond simply selling support, many companies choose to add features that differentiate them.  Those additions are not open source, but they are still built on the open framework, and for most companies they are still consumed through open standards such as REST APIs and other standards-based interfaces.  Even VMware, perhaps the largest cloud software provider, uses standard APIs and supports popular tools for managing its systems.
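To make that concrete, here is a minimal sketch of what consuming a standards-based cloud looks like in practice. The endpoint URLs, credentials, and project names are placeholders I made up for illustration, but the Identity (Keystone) and Compute (Nova) REST calls are the same against any OpenStack-based provider, which is exactly the point of building on an open standard.

```python
# A minimal sketch of consuming an OpenStack cloud through its open REST APIs.
# Endpoint URLs, credentials, and project names are placeholders; any
# OpenStack-based provider exposes the same Identity and Compute APIs.
import requests

KEYSTONE = "https://cloud.example.com:5000/v3"   # Identity endpoint (assumed)
NOVA = "https://cloud.example.com:8774/v2.1"     # Compute endpoint (assumed)

# Authenticate against the Identity v3 API; the token comes back in a header.
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "demo",
                                  "domain": {"name": "Default"},
                                  "password": "secret"}},
        },
        "scope": {"project": {"name": "demo",
                              "domain": {"name": "Default"}}},
    }
}
resp = requests.post(f"{KEYSTONE}/auth/tokens", json=auth_body)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]

# The same token works against any service in the catalog; here we list servers.
servers = requests.get(f"{NOVA}/servers", headers={"X-Auth-Token": token})
for server in servers.json()["servers"]:
    print(server["id"], server["name"])
```

The same two calls, pointed at a different provider's endpoints, do the same job, which is the portability argument in a nutshell.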

Open standards, open APIs, and standards-based management features are critical for the cloud.  Of course everyone wants you to choose their cloud, but to be honest, most of us consume multiple cloud services at once.  I use Dropbox, Box.net, Google Drive, SkyDrive, and a few other cloud storage providers because they each have a different use case for me.  I use Netflix and Hulu Plus because they give me different content.  Why then should business consumers not use some AWS, some Google Enterprise Cloud, some HP Public Cloud, and perhaps even some of the other, smaller providers?  For the cloud to continue to be of value, we will have to adjust to the multi-provider cloud, and everyone will have to compete on the best services, the best features, and the best value.


Defining the cloud Part 2: Self Service

In Defining the cloud Part 1: Applications, we discussed how applications are the reason for the cloud, and how we move abstraction from the servers to the applications.  Moving forward we now look at how the cloud should enable users to provision their own systems within given parameters.

Self Service

In the early days of virtualization, we were very excited because we were making IT departments more efficient. I remember IT managers actually telling young server admins to stall when creating virtual servers, to keep users from realizing how quickly it could be done. What previously took the IT department hours, weeks, or months was now done with the press of a button and a few minutes of waiting, assuming proper capacity planning.

IT is often seen as a cost center. For years now we have been preaching the gospel of IT as a Service: the idea that technology becomes a utility to the business. Nicholas Carr championed this in his book The Big Switch, popularizing the notion that, much like electricity, technology is becoming something that should just work. IT is no longer just for those of us who understand it; it becomes a tool that anyone can use, just like flipping a switch to turn on a light or turning on the TV.

So how do we make this happen? It is as simple as looking at the smartphone in front of you or in your pocket.  The thing that makes your phone so great is not the brand, not the operating system, not even the interface; it is the application ecosystem.  I can go on my phone and grab an app to do just about anything.  I don’t need to know how the system works; I just grab apps and don’t really think about how they interact with the phone.

Imagine giving this to our end users: a catalog where they simply say what they need.  A user wants to build an application, so they go to the catalog, select a pre-defined template, and the rest is handled by the system.  No IT intervention, no human interaction required, just a few simple clicks, almost like grabbing an app on a phone.  Their entire virtual infrastructure is built out for them, and they are notified when it is complete.
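As a rough sketch, here is what that catalog-driven request could look like behind the scenes on an OpenStack-based cloud such as HP Helion, using the openstacksdk library. The cloud name, image, flavor, and network below are assumptions standing in for a pre-defined template, not a specific HP offering; the point is that a pre-approved template plus a single API call replaces the old ticket-and-wait workflow.

```python
# A sketch of self-service provisioning against an OpenStack-based cloud.
# Cloud name, image, flavor, and network are hypothetical catalog values.
import openstack

# Credentials and endpoints come from a clouds.yaml entry named "helion" (assumed).
conn = openstack.connect(cloud="helion")

# "Selecting from the catalog": the template simply pins an image, a flavor,
# and a network that IT has pre-approved.
template = {
    "image": "ubuntu-22.04",      # hypothetical catalog image
    "flavor": "m1.small",         # hypothetical pre-approved size
    "network": "project-net",     # hypothetical tenant network
}

# One call builds the server; no ticket, no human in the loop.
server = conn.create_server(
    name="app-server-01",
    image=template["image"],
    flavor=template["flavor"],
    network=template["network"],
    wait=True,                    # block until the instance is ACTIVE
)
print("Ready:", server.name, server.status)
```

The "catalog" here is nothing more than a set of pre-approved values; the self-service portal just hides the API call behind a button.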

So what does this all have to do with HP?  Stick with me on this: this is the future, this is HP Helion, and it is amazing.


Software Defined Storage Replication

In a recent conversation with a colleague, we were discussing storage replication in a VMware environment.  Basically, the customer in question had bought a competitor’s array, brand X, and wanted to see if there was a way to replicate to one of our arrays at a lower price point.

This is a fairly common question from customers, more so in the SMB space.  With the increasing popularity of Software Defined Storage, customers want options; they don’t want to be locked into a single-vendor solution.  In an OpenStack environment, high availability is handled at the application level, and I strongly recommend this as a policy for all new applications.  But how do we handle legacy apps in the interim?

In a traditional storage array, we typically replicate at the storage level.  VMware Site Recovery Manager allows us to automate the replication and recovery process by integrating with the array’s replication, and in smaller environments it can even handle replication at the vSphere host level. Array-based replication is generally considered the most efficient and the most recoverable, but it does require similar arrays from the same vendor, with replication licensing. In a virtual environment this looks something like the picture below.

[Figure: array-based replication between similar storage arrays in a VMware environment]

This works well, but it can be costly and leads to storage vendor lock-in, which is not a bad thing if you are a storage vendor, but not always the best solution from a consumer perspective. So how do we abstract the replication from the storage? Remember, one of the purposes of virtualization and OpenStack is to abstract as much as possible from the hardware layer. That is not to say hardware is unimportant, quite the contrary, but abstraction does allow us to be more flexible.

To provide this abstraction there are a couple of options. We can always rewrite the application, but that takes time. We can replicate at the file system level, or similarly use third-party software to move the data. But to really abstract the replication from the hardware and software, we need to insert something in the middle.

In the conversation I mentioned at the beginning, the goal was to replicate from the production datacenter, running brand X storage, to a remote location using an HP storage product. We discussed vSphere Replication, something I will cover in a future post, and we discussed host-based replication, but that is not as seamless. What we settled on is shown below. Essentially, since the HP StoreVirtual VSA has replication built in, we put it in front of the brand X storage, and on the other side we put another VSA on a server with some large hard drives. Voila: replication and DR storage handled.

[Figure: StoreVirtual VSA layered in front of brand X storage, replicating to a second VSA at the DR site]

It is not the most elegant solution, but it abstracts the replication from the storage, and does so at a reasonable cost. The added advantage is that we have also given ourselves DR storage. Next I will explore vSphere Replication, but as always I want to point out that this solution minimizes vendor lock-in at both the hypervisor and storage levels.


Reference Architectures, Hardware Compatibility Lists, and you.

Recently I was giving a presentation on designing storage for VMware Horizon.  I was referencing the HP Client Virtualization SMB Reference Architecture for VMware View, which is based on an earlier version but still valid.  The conversation kept coming back to questions like “can’t I do more than that?” and “why wouldn’t I just do it this way?”

One of the better hardware compatibility lists is actually the VMware Compatibility Guide.  Its best feature is that it is simple to understand, searchable, and matrixed.  It is a critical tool because it tells us what has been tested and what works, and more importantly what can be supported.  Of course it is often more expensive to go with supported configurations, but if cost were the primary criterion, it would make more sense to use open source technologies.  While I am a big fan of open source for labs and various projects, the cost of supporting it in a production environment is often far more than simply using a supported configuration and paying for support.  The same is true of running on unsupported commodity hardware.

The same can be said of reference architectures.  HP does an excellent job of creating these, especially because it has hardware in all major categories.  In the example I started with, the major issue was that the questions were all about cost.  The person creating the design wanted to know why they can’t remove parts or replace them with cheaper ones.  The short answer is simply that the reference architecture is tested with all the components it contains.  It is a known quantity, so it will work, and if it doesn’t, the support teams can fix it because they know all the pieces.

So to sum up, doing things the way the manufacturer recommends will save a great deal of heartache.  To answer the question, you can do things your own way, but you may find that it is more trouble to support than it is worth.


The changing role of shared storage in the Software Defined Datacenter: Part 3

As we have discussed, the role of shared storage is changing.  VMware has supported vMotion without shared storage for a while now, software-defined storage is enabling compute and storage virtualization to share the same hosts, and for the past year or so we have been hearing more about the concept of vVols.  I am certainly not the first to talk about this; there are a number of blogs on the subject, my personal favorite being The future of VMware storage – vVol demo by @hpstorageguy.

As always, in the interests of full disclosure, I do work for HP, but this is my personal blog and I write about things I find interesting.  I am not going into great detail on how vVols work, but I do want to show a few diagrams to differentiate the current architecture from what we MAY see in the future.

Looking at the current and legacy architecture of VMware storage, we typically present storage to all hosts in the cluster in the form of a shared LUN or volume.  This is very simple: the VMware admin asks the storage admin for a number of volumes of a specific size; in our example below, let’s say they request two 2TB volumes.  The VMware administrator then creates datastores, which formats the volumes with the VMFS file system and allows virtual machines to be created within them.  Of course this whole process can be done through the VMware GUI using the vSphere storage APIs, but the net effect is the same: we still create another layer in the storage stack, which is not the most efficient way of handling this.

[Figure: traditional VMware storage, with shared VMFS datastores presented to every host in the cluster]
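To see what that datastore layer looks like from the VMware side, here is a minimal pyVmomi sketch that lists the datastores vCenter knows about, with their file system type and capacity. The hostname and credentials are placeholders; the point is simply that the VM administrator works with the datastore abstraction rather than the underlying array volumes.

```python
# A minimal pyVmomi sketch listing the datastores visible to vCenter, to show
# that what the VM administrator sees is the VMFS datastore layer, not the
# underlying array volumes. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    summary = ds.summary
    print(f"{summary.name}: {summary.type}, "
          f"{summary.capacity / 2**40:.1f} TiB capacity")

Disconnect(si)
```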

 

vVols are VMware’s new way of handling storage, and they resolve this problem in a rather unique way.  Today we can bypass the datastore concept with a raw device mapping (RDM), which presents a raw disk device to the virtual machine itself.  Unfortunately that does not give us a measurable difference in performance, and it can become tedious to manage.  vVols, on the other hand, appear to be datastores but actually pass the individual volumes through to the individual VMs.  In the drawing below, the individual volumes appear to the VM administrator as datastores, but they are broken out on the storage array.  This removes a layer from the storage stack and enables a more policy-based storage interface for the VMware administrator.  That is the critical point: policy-based storage at the VMware level, which brings us closer to self-service in a virtualized environment.  I don’t yet have a full handle on how this will be managed, but I think it is safe to say the storage administrator will create a container giving the VMware admin a specific amount of storage with specific characteristics.  In the case of our example, 2TB containers.

 

[Figure: vVols presenting individual per-VM volumes through storage containers]

 

Note that the volumes above are of varying sizes, but what is not shown is that these volumes, or LUNs, are individual disks presented directly to the virtual machine itself.  This is important to remember: we are offloading the performance of each individual disk presented to the virtual machine to the storage array, yet we are still able to manage it as a datastore, or container, on the VMware side.

Coming back to the policy-based storage thought, this is not dissimilar to how HP 3PAR storage operates: volumes live within common provisioning groups, which are containers.  The policy is set on the container in both cases, so it isn’t a stretch to see how the two will work well together.  Again, I don’t have any inside information, but if you look at the post referenced above, Calvin does an excellent job of showing us what is coming.  This, combined with VMware’s recent VSAN announcements, seems to show that there is going to be a role for the traditional storage array alongside software-defined storage in the software-defined datacenter, at least for now.


Converged Systems – More than the sum of its parts

In the interests of full disclosure, I work for Hewlett-Packard, so this is my semi-unbiased opinion.  I do spend my days talking about HP products, so I am not completely independent, but this post has less to do with product and more to do with concepts and standards.

Over the past few years, we have seen a number of vendors releasing converged systems, pods, blocks, and other packaged systems that are essentially designed to simplify the ordering, provisioning, and support processes.  In my job I am fortunate enough to speak with many smart people, and while discussing this trend with a technical salesperson, they asked me why anyone would buy a system this way when it would be cheaper to purchase the components and build it the way we always have.  Why would anyone want to pay more to get one of these systems?

To get the answer, we really need to determine what the motives are.  I posted previously about converged infrastructure, and I do tend to talk about the cloud, automation, and the need for a new, more efficient way of deploying infrastructure.  The best part about statistics is that they can always make your point for you, but having worked in IT for over 20 years in many roles, I believe it is safe to say IT typically spends anywhere from 70-80% of its time on operations.  That is just keeping the lights on.  To put that into dollars: if my IT budget, excluding salary, is $1M, I am spending $700K-800K on keeping the lights on.  It also means that out of a 40-hour work week (yeah, right), staff are spending between 28 and 32 hours on support and basic operational tasks, not on making the business productive, implementing new projects, or planning for the future.  This lack of time for innovation creates delays when new projects need to be done, and it is counterproductive.  To solve this, you either wait, hire more people, or bring in temporary help in the form of vendors or consultants to do some of the work for you.  If you do bring in a vendor, or even a consultant, they will often stand up their components, but it is rare to find one who builds out a solution and hands it to you ready to install your applications.
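For what it’s worth, here is that back-of-the-envelope math made explicit in a few lines of Python; the budget figure and the 40-hour week are just the illustrative assumptions from the paragraph above.

```python
# Back-of-the-envelope math for the operational overhead described above.
# The $1M budget and 40-hour week are the illustrative numbers from the text.
it_budget = 1_000_000     # annual IT budget, excluding salary ($)
work_week_hours = 40      # nominal work week (hours)

for ops_share in (0.70, 0.80):  # 70-80% of time and budget spent keeping the lights on
    print(f"{ops_share:.0%} on operations -> "
          f"${it_budget * ops_share:,.0f} of budget, "
          f"{work_week_hours * ops_share:.0f} hours per week")
```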

One of the values of a converged system, no matter who it comes from, is having a complete system.  I like the analogy of my computer.  I am pretty technical, and I love tinkering.  I used to love building gaming PCs: I would spend months planning and designing, order the parts, and then spend hours painstakingly assembling the system.  Now I purchase the computer I want, and when it is too slow I purchase something faster.  I know when it arrives I will install apps, and I might even put on a new operating system, but other than memory or hard drive upgrades I typically don’t mess with the hardware.  It is just not worth the time.  Besides, with some of the newer systems I can do much more using VMware Workstation, or Crouton on my Chromebook, running virtual systems or different environments on a single machine.  The concept behind the converged system is that, unlike a reference architecture, you aren’t building from parts; you are purchasing the whole system along with the services to stand it up.  Then, much like a shiny new computer, it is handed to you ready to go, just add your applications.  For a majority of these systems the hypervisor is already in place, with VMware still preferred by many.

There are many other benefits to explore, but the key is to remember that there are sometimes considerations beyond the cost of hardware; you also have to weigh the opportunity cost of building your own systems.
