Who moved my VMware C# Client?

Years ago I was handed a rack of HP servers, a small EMC storage array, and a few CDs with something called ESX 2 on them. I was told I could use this software to put several virtual servers on the handful of physical servers I had available to me. There was a limited web client available, but most of my time was spent on the command line over SSH. The documentation was sparse, so I wrote procedures for the company I was at, quickly earning myself a promotion and a new role as a storage engineer.

Today VMware is announcing that the next release of the vSphere product line will deprecate the C# client in favor of the web client. As I have gone through this process, both as a vExpert and a VMware employee, there have been many questions. During our pre-announcement call with the product team at VMware, a number of concerns were voiced about what will work on day 1, and what this does to the customers who have come to rely on the C# client's performance. Rather than focus on the actual changes, most of which are still to be determined, it seemed more helpful to talk about the future of managing systems, and the future of operations.


When I started working in server administration, the number of systems one admin might manage was pretty low, maybe less than a dozen. With the advent of virtualization and cloud native applications, DevOps and NoOps, administrators are managing farms of servers, most of them virtual. We often hear about pets vs. cattle: the idea that most of our servers are moving from being pets, something we care for as part of the family, to cattle, something we use to make money. If one of our cattle has a problem, we don't spend too much time on it; we have many others, and we can just make more.
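The cattle mindset shows up in tooling as well. Here is a minimal sketch of the idea, with entirely hypothetical names and fields (not any particular product's API): an unhealthy server is never troubleshot, it is simply replaced from a known-good template.

```python
# A toy illustration of "cattle" operations: unhealthy servers are not
# nursed back to health, they are destroyed and rebuilt from a template.
# All names and fields here are invented for illustration.

def remediate(fleet, template):
    """Replace any unhealthy server with a fresh copy built from the template."""
    healed = []
    for server in fleet:
        if server["healthy"]:
            healed.append(server)
        else:
            # No debugging session: keep the name, rebuild everything else.
            healed.append({"name": server["name"], **template, "healthy": True})
    return healed

fleet = [
    {"name": "web-01", "image": "web-v2", "healthy": True},
    {"name": "web-02", "image": "web-v1", "healthy": False},  # failed node
]
template = {"image": "web-v2"}

print(remediate(fleet, template))
```

The point is the shape of the operation: remediation is a rebuild, not a repair, which is only practical once provisioning is automated.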

Whether it is a VMware product, OpenStack, or another management tool, abstracting the deployment and management of systems is becoming more mainstream, and more cost effective. In this model, a management client is far less important than APIs and the full stack management they can enable. For the few use cases where a client is needed, the web client will continue to improve, but the true value is that these improvements will drive new APIs and new tools for managing systems. While change is never easy, a longer-term view of both where we came from and where we are going with these interfaces reminds us that this is a necessary change, and less impactful than it may seem at first glance.


Defining the cloud Part 4: Supported

As I try to bring this series to a close, I want to look at what I would consider one of the final high-level requirements in evaluating a cloud solution.  In the previous posts, we looked at the cloud as being application centric, self service, and open.  These are critical, but one of the more important parts of any technology is support.  This is something which has plagued Linux for years.  For many of us, Linux and Unix are considered far superior to Windows for many reasons.  The challenge has been the support.  Certainly Red Hat has done a fairly good job of providing support around their Fedora-based Red Hat Enterprise Linux, but that is one distro.  Canonical provides some support around Ubuntu, and there are others.

The major challenge with the open source community is just that: it is open.  Open is good, but when we look at the broader open source community, many of the best tools are written and maintained by one person or a small group.  They provide some support for their systems, but oftentimes that is done as a favor to the community, or for a very small fee; they need to keep day jobs to make that work.

One challenge which seems to be better understood with the cloud, especially around OpenStack, is the need for enterprise support.  More and more companies are starting to jump on board and provide support for OpenStack, or their variant of it.  This works well, so long as you only use the core modules which are common.  In order to make money, all companies want you to use their add-ons.  This leads to some interesting issues for customers who want to add automation on top of the cloud, or other features not in the core.

At the end of the day, a compromise must be struck.  It is unlikely that most companies will use a single vendor for all their cloud software, although that could make things less challenging in some regards.  It comes down to trade-offs, but it is certain that we will continue to see further definition and development around the cloud, and around enterprise support for technologies which further abstract us from the hardware, and enable us to be more connected, to use the data which is already being collected, and the devices which are being and will be developed for this crazy new world.


Defining the cloud Part 3: Open


This may seem like an odd topic for the cloud, but I think it is important.  One of the questions I have been asked many times when discussing cloud solutions with customers is around portability of virtual machines, and interoperability with other providers.  This of course raises some obvious concerns for companies who want to make money building or providing cloud services and platforms.

We live in a soundbite culture.  If it can’t be said in 140 characters or less, we don’t really want to read it.  Hopefully you are still reading at this point; this is way past a tweet.  We like monthly services versus owning a datacenter; who wants to pay for the equipment when you can just rent it in the cloud?  More and more services are popping up to make it simpler for us to rent houses for a day or a few days, get a taxi, rent a car by the mile, or a bike by the hour.  There is nothing wrong with this, but we need to understand what makes it possible.  What if each car had different controls to steer?  What if there were no standard?  How could the providers then create services?  It is all based on open and agreed-upon standards.

In order for the cloud to be truly useful, it must be based on standards.  This is where OpenStack is most important.  Going far beyond just a set of APIs, OpenStack enables us to have a core set of features that are common to everyone.  Of course, in order to make money beyond just selling support, many companies choose to add features which differentiate them.  These additions are not open source, but they are still built on the open framework.  For most companies, this still means open standards such as REST APIs, and other standards-based ways of consuming the service.  Even VMware, perhaps the largest cloud software provider, uses standard APIs, and supports popular tools for managing their systems.

Open standards, open APIs, and standards-based management features are critical for the cloud.  Of course everyone wants you to choose their cloud, but to be honest, most of us consume multiple cloud services at once.  I use Dropbox, Box.net, Google Drive, SkyDrive, and a few other cloud storage providers because they all have different use cases for me.  I use Netflix and Hulu Plus because they give me different content.  Why then should business consumers not use some AWS, some Google Enterprise Cloud, some HP Public Cloud, and perhaps even some of the other smaller providers?  For the cloud to continue to be of value, we will have to adjust to the multi-service-provider cloud, and everyone will have to compete on the best services, the best features, and the best value.


Defining the cloud Part 2: Self Service

In Defining the cloud Part 1: Applications, we discussed how applications are the reason for the cloud, and how we move abstraction from the servers to the applications.  Moving forward we now look at how the cloud should enable users to provision their own systems within given parameters.

Self Service

In the early days of virtualization, we were very excited because we were making IT departments more efficient. I remember IT managers actually telling young server admins to stall when creating virtual servers, to prevent users from realizing how quickly it could be done. What previously took the IT department hours, weeks, or months could now be done with the press of a button and a few minutes, assuming proper capacity planning.

IT is often seen as a cost center. For years now we have been preaching the gospel of IT as a Service, the concept that technology becomes a utility to the business. Nicholas Carr championed this idea in his book, The Big Switch, popularizing the notion that, much like electricity, technology was becoming something that should just work. IT is no longer just for those of us who understand it; rather, it becomes a tool that anyone can use, just like flipping a switch to turn on a light, or turning on the TV.

So how do we make this happen? It is as simple as looking at the smartphone in front of you or in your pocket.  The thing that makes your phone so great is not the brand, not the operating system, not even the interface; the most important thing is the application ecosystem.  I can go on my phone and grab an app to do just about anything.  I don’t need to know how the system works; I just grab apps and don’t really think about how they interact with the phone.

Imagine giving this to our end users: simply give them a catalog to say what they need.  A user wants to build an application, so they go to a catalog, select from a pre-defined template, and the rest is handled by the system.  No IT intervention, no human interaction required, just a few simple clicks, almost like grabbing an app on a phone.  Their entire virtual infrastructure is built out for them, and they are notified when it is complete.
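A minimal sketch of what such a catalog looks like underneath, with made-up template names and fields (no real product's API is implied): the user picks a template, and the system turns that selection into a build request with no human in the loop.

```python
# A toy self-service catalog: the user picks a template by name, and the
# "system" returns a build request with no IT intervention required.
# Template names and spec fields are invented for illustration.

CATALOG = {
    "small-web": {"cpus": 2, "ram_gb": 4, "image": "linux-web"},
    "large-db":  {"cpus": 8, "ram_gb": 32, "image": "linux-db"},
}

def provision(template_name, owner):
    """Turn a catalog selection into a concrete build request."""
    if template_name not in CATALOG:
        raise ValueError(f"unknown template: {template_name}")
    spec = dict(CATALOG[template_name])  # copy, so the catalog stays pristine
    spec["owner"] = owner
    spec["status"] = "building"          # the user is notified when complete
    return spec

request = provision("small-web", owner="jsmith")
print(request)
```

The parameters live in the pre-defined templates, so users choose from approved options rather than designing infrastructure themselves, which is exactly the "within given parameters" constraint described above.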

So what does this all have to do with HP?  Stick with me on this: this is the future, this is HP Helion, and this is amazing.


Software Defined Storage Replication

In a recent conversation with a colleague, we were discussing storage replication in a VMware environment.  Basically, the customer in question had bought a competitor's array, brand X, and wanted to see if there was a way to replicate to one of our arrays at a lower price point.

This is a fairly common question from customers, more so in the SMB space, but with the increasing popularity of Software Defined Storage, customers want options; they don't want to be locked into a single-vendor solution.  In an OpenStack environment, high availability is handled at the application level, and I strongly recommend this as a policy for all new applications.  But how do we handle legacy apps in the interim?

In a traditional storage array, we typically do replication at the storage level.  VMware Site Recovery Manager allows us to automate the replication and recovery process by integrating with the storage replication, and in smaller environments it can even handle replication at the vSphere host level. Array based replication is generally considered the most efficient, and the most recoverable. This does require similar arrays from the same vendor, with replication licensing. In a virtual environment this looks something like the picture below.


This works well, but can be costly and leads to storage vendor lock-in, not a bad thing if you are a storage vendor, but not always the best solution from a consumer perspective. So how do we abstract the replication from the storage? Remember, one of the purposes of virtualization and OpenStack is to abstract as much as possible from the hardware layer. That is not to say hardware is not important, quite the contrary, but abstraction does enable us to become more flexible.

To provide this abstraction there are a couple of options. We can always rewrite the application, but that takes time. We can do replication at the file system level, or similarly use third-party software to move the data. But in order to really abstract the replication from the hardware and software, we need to insert something in the middle.

In the conversation I mentioned at the beginning, the goal was to replicate from the production datacenter running brand X storage to a remote location using an HP storage product. To accomplish this, we discussed using vSphere Replication, something I will cover in a future post, and we discussed host based replication, but that is not as seamless. What we settled on is below. Essentially, since the HP StoreVirtual VSA has replication built in, we put it in front of the brand X storage, and on the other side we put another VSA on a server with some large hard drives, and voila, replication and DR storage handled.

Storage replication with the StoreVirtual VSA (diagram)

Not the most elegant solution, but it is a way to abstract the replication from the storage, and to do so at a reasonable cost. The advantage is that we have also given ourselves DR storage. Next I will explore vSphere Replication, but as always I want to point out that this solution minimizes vendor lock-in at both the hypervisor and storage levels.


The changing role of shared storage in the Software Defined Datacenter: Part 1

I was having a conversation the other day with some colleagues about the future of our profession.  As you probably know by now, I have spent the better part of the past decade working on storage and virtualization specifically.  More and more, I find myself discussing the erosion of traditional storage in the market.  Certainly there will always be storage arrays; they have their efficiencies, enabling storage to be shared between servers and provisioned just in time, and preventing the stranded capacity which was a challenge for many of us in the not so distant past.

To demonstrate this, we should look at the traditional virtualization model (diagram: servers attached to a shared storage array).

We have traditionally used the shared storage array for redundancy, clustering, and minimizing storage waste.  When I was a storage administrator, I was very good at high performance databases.  We would use spindle count and RAID type to make our databases keep up with the applications.  When I moved on to being a consultant, I found ways to not only deliver the performance needed, but also to minimize wasted space by using faster drives, tiering software, and virtualization to cram more data onto my storage arrays.  As above, in the traditional model, deduplication, thin technologies, and similar solutions were of huge benefit to us.  It became all about efficiency and speed.  With virtualization this was also a way to enable high availability and even distribution of resources.

What we have seen over the past several years, is a change in architecture known as software defined storage.

Software Defined Storage (StoreVirtual VSA diagram)

With SSD drives in excess of 900GB, and that size expected to keep increasing, and with small form factor SATA drives at 4TB and even larger drives coming, the way we think about storage is changing.  We can now use software to keep multiple copies of the data, which allows us to simulate a large traditional storage array, and newer features such as tiering in the software bring us one step closer.
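In spirit, "software keeping multiple copies" comes down to replica placement. The toy sketch below is purely illustrative; real products (such as the StoreVirtual VSA mentioned in this post) use far more sophisticated schemes, but the core idea is the same: every chunk of data lands on more than one node, so any single node can fail without data loss.

```python
# A toy sketch of software-defined redundancy: each chunk of data is
# written to `copies` distinct nodes. Node names are invented.

def place_replicas(chunk_id, nodes, copies=2):
    """Pick `copies` distinct nodes for a chunk, round-robin by chunk id."""
    if copies > len(nodes):
        raise ValueError("not enough nodes for the requested copy count")
    start = chunk_id % len(nodes)  # spread load evenly across nodes
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

nodes = ["node-a", "node-b", "node-c"]
print(place_replicas(0, nodes))  # chunk 0 lives on two distinct nodes
print(place_replicas(1, nodes))  # chunk 1 starts one node over
```

With big, cheap local drives in each server, this kind of software placement is what lets a cluster of ordinary hosts stand in for a traditional shared array.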

Ironically, as I was writing this, @DuncanYB re-posted on Twitter an article he wrote a year ago, RE: Is VSA the future of Software Defined Storage? (OpenIO).  I do follow Duncan, among several others, quite closely, and I think what he is saying makes sense.  Interestingly, some of what he is talking about is being handled by OpenStack, but that does introduce some other questions.  Among these are: is this something OpenStack should be solving, or does this need to be larger than OpenStack in order to gain wide adoption?  What is the role of traditional arrays in the Software Defined Datacenter?

Needless to say, this is a larger conversation than any of us, and it is highly subjective.  I would hope that the next few posts become part of that larger conversation, and that they will cause others to think, debate, and bring their ideas to the table.  As always, I have no inside information; these are my personal thoughts, not those of my employer or any other company.


Software Defined Storage – Hype or the Future?

If you have a Twitter account, or read any of VMware’s press releases, or any of the technical blogs, you have to know by now that VMware is back in the storage business with a vengeance.  As most of us will recall, the first rollout, their Virtual Storage Appliance, VSA, was less than exciting, so I have to admit that when I first heard about vSAN I was a little skeptical.  Of course, over the past several months we have watched things play out on social media with the competitors, with arguments over software defined storage versus traditional hardware arrays, which raises the question: is Software Defined Storage all hype, or is this the future of storage?

So as always, in the interest of full disclosure: I work for HP, which clearly has a dog in this fight, and I have worked with VMware products for nearly ten years now, as a customer, a consultant, and in my current role speaking about VMware to customers, and in various public presentation forums as often as possible.  While I attempt to be unbiased, I do have some strong opinions on this.  That being said…

When I first touched VMware, I was a DBA/Systems Engineer at a casino in Northern California.  We badly needed a lab environment to run some test updates in, and despite some begging and pleading, I was denied the opportunity to build the entire production environment in my test lab.  We debated going with workstations and building that way, but one of our managers had read about VMware, and wanted us to determine if we could use it for a lab, with the thought that we could virtualize some of our production servers.  Keep in mind this was in the early ESX 2 days, so things were pretty bare at that point; documentation was spotty, and management was nothing like we have today.  By the time we completed our analysis and were ready to go to production, ESX 3 was released and we were sold.  We were convinced that we would cut our physical infrastructure substantially, and we thought that servers would become a commodity.  While compute virtualization does reduce the physical footprint, it does introduce additional challenges, and in most cases it simply changes the growth pattern: as infrastructure becomes easier to deploy, we experience virtual sprawl instead of physical sprawl, which in turn leads to growth of the physical infrastructure.  Servers are far from a commodity today; server vendors are pushing harder to distinguish themselves and to go further, achieve higher density, and give just a little bit more performance or value.  In the end, VMware’s compute virtualization just forced server vendors to take it to another level.

When VMware started talking about their idea of a vSAN, I immediately started trying to find a way to get in on the early beta testing.  It was a compelling story, and I was eager to prove that VMware was going to fall short of my expectations again.  There is no way the leader in compute virtualization can compete with storage manufacturers.  Besides, software defined storage was becoming fairly common in many environments, and something that was moving from test/dev into production environments, so the market was already pretty saturated.  As I started to research and test vSAN for myself, as well as reading up on what the experts were saying about it, I was quite surprised.  This is a much different way of looking at software defined storage, especially where VMware is concerned.

At the end of the day there are a lot of choices out there from a software defined storage perspective.  The biggest difference is who is backing them.  When I was building my first home lab, and software defined storage was not really prime time, we used to play around with Openfiler and FreeNAS, which were great for home labs at the time.  They gave us iSCSI storage so we could test and demo, but I have only met a few customers using them for production, and they usually were asking me to help them find something with support as a replacement.  The main difference with vSAN and the other commercially supported software defined storage implementations is the features.  The reality is that no matter what you choose, far more important than picking the best solution is having enterprise level support.  The important thing is to look at the features, put aside all the hype, and decide what makes sense for your environment.

I don’t think we will see the end of traditional storage anytime soon, if ever, although in many environments we will continue to see high availability move into the application layer, and shared storage will become less of an issue; think OpenStack.  I do think, though, that most of us will agree that software defined storage is the future, for now, so it is up to you, the consumer, to decide what features make sense, and what vendor can support your environment for the term of the contract.


Converged Infrastructure and You

Pausing briefly from the VMware Storage topic, I thought it might be a good time to write a bit about converged infrastructure and how it is changing the role of IT.

I, like many of you, started out supporting and designing various systems.  I chose to specialize in storage and virtualization, and started down the path of becoming very focused.  When I started working on storage, I was offered the position by my manager at the time.  He told me it would be a great career move; people who specialized in storage at the time were making a good living, and had their choice of jobs.  At that time, most companies’ IT departments were very silo-ed, budgets were big, and departments were always growing.

When I moved into datacenter management and then into consulting, I began to realize that specializing would no longer work.  Around this time I read Nicholas Carr’s book, The Big Switch: Rewiring the World, from Edison to Google.  In thinking this through, it occurred to me that if IT is truly to become a utility as he suggests, it is not going to be possible for those of us involved in designing, building, supporting, and selling IT infrastructure to be specialized in a specific technology.  As I began to talk to companies, I began to understand the concept of good enough.

Most of us in the IT field are perfectionists, or something close.  It is difficult for us to not design the perfect solution for every project or customer.  The problem is that most customers don’t have the money for the perfect solution.  Ideally every project should be designed by a team each with expertise in their respective areas.  It should be custom built for the client, and it should use a variety of technologies custom designed for their specific needs.

The reality is that most customers need a solution that is just good enough.  It doesn’t need to be the perfect fit, and it doesn’t need to be fully customized.  Sure, there are some customers who still want that level of customization, but they often don’t want to pay for it.  With the growing popularity of OpenStack, IT automation, and the desire to consume everything as a service, most business people don’t care what is under the covers.  Gone are the days when IT budgets rival that of a small city, or in some cases a small country.  Business people don’t care how smart you are, or how much you know; they don’t care if it is a Mac or a PC; they want it to work, and they want it to be fast and simple, sorry Nick Burns.

Bringing this back to storage and virtualization, the promise of virtualization was breaking down the barriers between the different silos within IT; it was to bring more self service and more automation.  I am always amazed by the companies who go to great lengths to put the silos back in place.  The growing popularity of converged infrastructure means we don’t have the luxury of specializing.  Oh sure, there will always be opportunities in some very large companies to specialize; someone will always slip through the cracks, maintaining their old way of doing things.  But for the majority of companies, and the majority of IT professionals, it is time to start diversifying.

For most of us this means expanding out of our comfort zones.  I personally am spending a little time each week forcing myself to learn more about my weaker areas, networking and big data, as well as brushing up on my server, cloud, and virtualization knowledge.  I will never be an expert in everything, but if I can help customers get to just good enough, at a price point they can afford, then that is worth more than the perfect solution they will never buy.  This is not to say that I won’t try my best, ask for help, and engage my peers with expertise in other areas, but it does mean that I can no longer afford to consider the areas I don’t typically work in to be someone else’s responsibility.  It is time for us to break down the barriers, start talking, cross train in each other’s jobs, and bring some value to the business.  Converged infrastructure means we all have to learn the entire stack and be able to manage everything.
