The changing role of shared storage in the Software Defined Datacenter: Part 1

I was having a conversation the other day with some colleagues about the future of our profession.  As you probably know by now, I have spent the better part of the past decade working specifically on storage and virtualization.  More and more, I find myself discussing the erosion of traditional storage in the market.  Certainly there will always be storage arrays; they have their efficiencies, enabling storage to be shared between servers, just-in-time provisioning, and the prevention of stranded capacity, which was a challenge for many of us in the not-so-distant past.

To demonstrate this, we should look at the traditional virtualization model.

[Figure: servers attached to shared storage]

We have traditionally used the shared storage array for redundancy, clustering, and minimizing storage waste.  When I was a storage administrator, I was very good at tuning for high-performance databases.  We would use spindle count and RAID type to make our databases keep up with the applications.  When I moved on to being a consultant, I found ways not only to deliver the performance needed, but also to minimize wasted space by using faster drives, tiering software, and virtualization to cram more data onto my storage arrays.  As above, in the traditional model, deduplication, thin technologies, and similar solutions were of huge benefit to us.  It became all about efficiency and speed.  With virtualization, this was also a way to enable high availability and even distribution of resources.

What we have seen over the past several years is a change in architecture known as software-defined storage.

[Figure: Software Defined Storage (StoreVirtual VSA)]

With SSD drives in excess of 900GB, and that size expected to keep increasing, and with small-form-factor SATA drives at 4TB and even larger drives coming, the way we think about storage is changing.  We can now use software to keep multiple copies of the data, which allows us to simulate a large traditional storage array, and newer features such as software-based tiering bring us one step closer.

Ironically, as I was writing this, @DuncanYB re-posted on Twitter an article he wrote a year ago: "Is VSA the future of Software Defined Storage? (OpenIO)".  I do follow Duncan, among several others, quite closely, and I think what he is saying makes sense.  Interestingly, some of what he is talking about is being handled by OpenStack, but that does introduce some other questions.  Among these: is this something OpenStack should be solving, or does it need to be larger than OpenStack in order to gain wide adoption?  What is the role of traditional arrays in the Software Defined Datacenter?

Needless to say, this is a larger conversation than any of us, and it is highly subjective.  I hope the next few posts become part of that larger conversation, and that they cause others to think, debate, and bring their ideas to the table.  As always, I have no inside information; these are my personal thoughts, not those of my employer or any other company.


Forget converged systems and the cloud, what’s really important are the apps.

My wife is a special education teacher who uses technology at school and at home, but she looks to just get the job done and is not interested in all the flashy features a product may offer.  She does try to love technology for my sake, but I can tell that when I get some new idea or piece of technology to test, she is just not excited about it.  She is, however, long-suffering and patient with my crazy ideas, which makes her an excellent sounding board for many of my theories and concepts.  One thing I have learned from our conversations is that she doesn't care what the technology is; she is much more concerned about the apps.

I remember when I first convinced her to take my old iPhone.  She had previously used an LG VUE, a basic phone with a slide-out QWERTY keyboard.  My company had given me a new iPhone 4S, so I dropped my personal line and was left with my iPhone 4 no longer in use.  She agreed to try it, so I set her account up and started downloading some apps she typically used on the PC.  This was the summer she also decided to get serious about running, so she loved the Nike+ app, and the Chase Mobile Banking app became very convenient.  Ironically, a few months after I gave her the phone, I was in the middle of a job transition and without a work phone for a few weeks.  I immediately started going through smartphone withdrawal, so I asked her if we could switch back for a few weeks until I got a new work phone.  Needless to say, that was a mistake; as soon as she was back on the standard phone, she missed all her apps.

The following summer, against her will, I convinced her to upgrade to the iPhone 5.  Her only concerns were: what will change?  Will my apps be there?  There was no thought about the faster technology, no wow factor; she just wanted to know that her apps would all be there.  We had a similar issue when she finally upgraded from iOS 6 to iOS 7: what will become of my apps?

This is a long-winded story, but I think it makes a good point.  At the end of the day, what I care about is shiny new technology.  Is it faster, is it cooler, what can I make it do?  For the consumers of technology, though, it isn't about the cloud and it isn't about converged systems; those are simply words to them.  It is about: can I have my apps, can I have them everywhere, on every device, and can I have them now?

I will leave you with this thought: as the providers of the applications, or of the infrastructure on which the applications run, it is long past time for us to stop thinking about how cool our technology is, or about who makes a better system or product, and start thinking about what the user's expectations are and how we can provide the experience they need without regard for what device they are on or what platform or technology they prefer.  This is the real reason for the cloud, for converged systems, and for the time we all put into making this work.


Software Defined Storage – Hype or the Future?

If you have a Twitter account, or read any of VMware's press releases or any of the technical blogs, you have to know by now that VMware is back in the storage business with a vengeance.  As most of us will recall, the first rollout, their Virtual Storage Appliance (VSA), was less than exciting, so I have to admit that when I first heard about vSAN I was a little skeptical.  Over the past several months, we have watched things play out on social media with the competitors, with arguments over software-defined storage versus traditional hardware arrays, which raises the question: is software-defined storage all hype, or is this the future of storage?

As always, in the interest of full disclosure: I work for HP, which clearly has a dog in this fight.  I have worked with VMware products for nearly ten years now, as a customer, a consultant, and in my current role speaking about VMware to customers and in various public presentation forums as often as possible.  While I attempt to be unbiased, I do have some strong opinions on this.  That being said…

When I first touched VMware, I was a DBA/systems engineer at a casino in Northern California.  We badly needed a lab environment in which to run test updates, and despite some begging and pleading, I was denied the opportunity to build out the entire production environment in my test lab.  We debated going with workstations and building it that way, but one of our managers had read about VMware and wanted us to determine whether we could use it for a lab, with the thought that we might later virtualize some of our production servers.  Keep in mind this was in the early ESX 2 days, so things were pretty bare at that point: documentation was spotty, and management was nothing like what we have today.  By the time we completed our analysis and were ready to go to production, ESX 3 had been released, and we were sold.  We were convinced that we would cut our physical infrastructure substantially, and we thought servers would become a commodity.  While compute virtualization does reduce physical footprint, it introduces additional challenges, and in most cases it simply changes the growth pattern: as infrastructure becomes easier to deploy, we experience virtual sprawl instead of physical sprawl, which in turn leads to growth of physical infrastructure.  Servers are far from a commodity today; server vendors are pushing harder to distinguish themselves, to achieve higher density, and to give just a little bit more performance or value.  In the end, VMware's compute virtualization just forced server vendors to take it to another level.

When VMware started talking about their idea of vSAN, I immediately started trying to find a way to get in on the early beta testing.  It was a compelling story, and I was eager to prove that VMware was going to fall short of my expectations again.  There was no way the leader in compute virtualization could compete with the storage manufacturers.  Besides, software-defined storage was already becoming fairly common in many environments and was moving from test/dev into production, so the market was already pretty saturated.  As I started to research and test vSAN for myself, and to read up on what the experts were saying about it, I was quite surprised.  This is a much different way of looking at software-defined storage, especially where VMware is concerned.

At the end of the day, there are a lot of choices out there from a software-defined storage perspective.  The biggest difference is who is backing them.  When I was building my first home lab, and software-defined storage was not really ready for prime time, we used to play around with Openfiler and FreeNAS, which were great for home labs at the time.  They gave us iSCSI storage so we could test and demo, but I have only met a few customers using them in production, and those customers were usually asking me to help them replace it with something that had support.  The main difference between vSAN and the other commercially supported software-defined storage implementations is the features.  The reality is that, no matter what you choose, far more important than picking the best solution is having enterprise-level support.  The important thing is to look at the features, put aside all the hype, and decide what makes sense for your environment.

I don't think we will see the end of traditional storage anytime soon, if ever, although in many environments we will continue to see high availability move into the application layer, making shared storage less of an issue; think OpenStack.  I do think most of us will agree that software-defined storage is the future, for now, so it is up to you, the consumer, to decide which features make sense and which vendor can support your environment for the term of the contract.


Converged Systems – More than the sum of its parts

In the interest of full disclosure, I work for Hewlett-Packard, so this is my semi-unbiased opinion.  I spend my days talking about HP products, so I am not completely independent, but this post has less to do with product and more to do with concepts and standards.

Over the past few years, we have seen a number of vendors releasing converged systems, pods, blocks, and other types of systems essentially designed to simplify the ordering, provisioning, and support processes.  In my job I am fortunate enough to speak with many smart people, and as I was discussing this trend with a technical salesperson, they asked me why anyone would buy a system this way when it would be cheaper to purchase the components and build it like we always have.  Why would anyone want to pay more for one of these systems?

To get the answer, we really need to determine what the motives are.  I have posted previously about converged infrastructure, and I do tend to talk about the cloud, automation, and the need for a new, more efficient way of deploying infrastructure.  The best part about statistics is that they can always make your point for you, but having worked in IT for over 20 years in many roles, I believe it is safe to say IT typically spends 70-80% of its time on operations.  That is just keeping the lights on.  To put that into dollar terms: if my IT budget, excluding salary, is $1M, I am spending $700K-$800K on keeping the lights on.  It also means that out of a 40-hour work week (yeah, right), staff are spending between 28 and 32 hours on support and basic operational tasks, not on making the business productive, implementing new projects, or planning for the future.  This lack of time for innovation creates delays when new projects need to be done, and it is counterproductive.  To solve this, you either wait, hire more people, or bring in temporary help in the form of vendors or consultants to do some of the work for you.  Even if you do bring in a vendor or a consultant, though, they will often stand up their own components; it is rare to find one who builds out a complete solution and hands it to you ready for your applications.

One of the values of a converged system, no matter who it comes from, is having a complete system.  I like the analogy of my computer.  I am pretty technical, and I love tinkering; I used to love building gaming PCs.  I would spend months planning and designing, order the parts, and then spend hours painstakingly assembling the system.  Now I purchase the computer I want, and when it is too slow I purchase something faster.  I know that when it arrives I will install apps, and I might even put on a new operating system, but other than memory or hard drive upgrades, I typically don't mess with the hardware.  It is just not worth the time.  Besides, with some of the newer systems I can do much more using VMware Workstation, or using Crouton on my Chromebook, so I can run virtual systems or different environments on a single machine.  The concept behind the converged system is that, unlike a reference architecture, you aren't building from parts but rather purchasing the whole system along with the services to stand it up.  Then, much like a shiny new computer, it is handed to you ready to go; just add your applications.  For a majority of systems, the hypervisor is already in place, with VMware still preferred by many.

There are many other benefits to explore, but the key is to remember that there are considerations beyond the cost of hardware; sometimes you have to weigh the opportunity cost of building your own systems.


Living with a Chromebook

A few months ago I wrote about my HP Chromebook and some of its advantages and limitations.  It has been a while, and I have been doing more with it, so I thought it would be interesting to share my experiences.

Previously I was using Crouton to enable a full Linux desktop, with LXDE as my preferred environment.  I have since changed to GNOME; I really like the minimalist interface it provides.  For the most part, though, I was using the Chromebook for web browsing, quick lookups, watching Netflix or Hulu while I was working, and other basic things.  I would occasionally use the Linux desktop if I wanted to set up an Eclipse environment to test something, but for the most part it was disposable.
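For anyone curious, getting a GNOME chroot running with Crouton is close to a one-liner.  A minimal sketch, assuming developer mode is enabled and the crouton script has been downloaded to ~/Downloads:

```shell
# From the ChromeOS crosh window (Ctrl+Alt+T), drop to a real shell first:
shell

# Install an Ubuntu chroot with the GNOME desktop target
sudo sh ~/Downloads/crouton -t gnome

# Start the GNOME session once the install finishes
sudo startgnome
```

Swapping LXDE for GNOME is just a matter of the target name (`-t lxde` versus `-t gnome`); Crouton installs them side by side in the same chroot if you ask for both.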

A couple of weeks ago, my work laptop bit the dust.  In its defense, I am not very easy on my equipment; I drive it to its max and expect it to perform, so it may have been me pushing it a little too hard, but come on, this is what I do.  I was pretty ticked, but luckily it was a few days before I went on PTO, and I still had my iPad and Chromebook.  That got me thinking, though, mostly because it took corporate IT 1-2 business days to get me a replacement.  I know that is pretty good, since I am a remote employee and the laptop had to come across the country, but in my world that is an eternity.  I mean, overnight shipping is too slow; I want a stinkin' replicator so I don't have to wait.

I did a little digging, trying to find a way to make my life a little less dependent on Windows and to avoid buying yet another personal computer, which would not have gone over well with my wife.  I found an article which led me to install Thunderbird and run my e-mail through it for about an hour, which was not a positive experience.  I just didn't like the interface, and the lack of proper calendaring support was not good.  Then I realized that the same concept would hold true for Evolution.  I did have to dig through some Ubuntu posts to find out that EWS support has to be installed separately, but it worked like a charm.  Much better.
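For reference, the piece that has to be installed separately on Ubuntu-based chroots is the evolution-ews package, which adds the Exchange Web Services connector to Evolution.  A sketch, assuming a stock Ubuntu/Debian package repository:

```shell
# Evolution does not ship Exchange support out of the box on Ubuntu;
# the EWS connector is packaged separately as evolution-ews.
sudo apt-get update
sudo apt-get install -y evolution evolution-ews
```

After that, the Exchange Web Services account type shows up in Evolution's account setup wizard, with proper calendar and address book support.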

I have since removed xterm in favor of GNOME Terminal, added the Chrome web browser, and made a couple of other tweaks, but for the most part it is great.  I still switch back to the Chrome desktop for basic browsing, since it is just faster and easier, but that is a simple keystroke, no reboot.
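Those tweaks look roughly like this inside the chroot.  A sketch, assuming the standard Google .deb package for Chrome; the keystroke at the end is Crouton's default for hopping between environments:

```shell
# Swap xterm for GNOME Terminal
sudo apt-get install -y gnome-terminal
sudo apt-get remove -y xterm

# Install Google Chrome from the official .deb package
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo apt-get install -f -y   # pull in any missing dependencies

# Switching between ChromeOS and the chroot needs no reboot:
# Ctrl+Alt+Shift+Back and Ctrl+Alt+Shift+Forward toggle between them.
```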

All in all, I am pretty happy with this setup.  I love the new GNOME interface and how clean it is.  I love that the system is faster than my Windows PC, and I am very happy with the apps I have so far.  If I need Windows apps, I just remote into my company-issued Windows 8 laptop, but I am doing more and more work from my Chromebook.  Kudos to the HP and Google teams for the effort they put into this; I am becoming more and more enamored with the simplicity of this setup and with the ability to change it as needed.  Let's see where I am on this in a few more months.




How relevant is VMware in an OpenStack world?

Last August, during VMworld, I had the honor of sitting in on a briefing with VMware's Paul Strong, VP, Office of the CTO. As always, he was engaging and incredibly knowledgeable. When he asked if we had any questions, I casually glanced around, and when none of my peers had anything to ask, I figured, why not.

"What is the impact of OpenStack on VMware? Do you see it as a threat?"

The answer was not what I was looking for. He talked about the marketplace and how the two compete on some fronts but complement each other on others. It was not quite what I wanted, but it was the right answer; I just didn't realize it yet.

I have been doing more work recently with customers who want to build internal private clouds, which of course means something different to each of them, but from my view the critical piece is a self-service portal: enabling users to make requests through a website and removing, or at least minimizing, the requirement for IT intervention.

What I thought about OpenStack, when I asked the question, was that it would invalidate VMware and all the things we have been talking about for years. As I have worked with it more, I think VMware actually becomes more relevant in these environments. While it is fun to put KVM in a lab and run OpenStack on top of it, it is not at the same level. OpenStack itself struggles with commercial support, with HP offering one of the more viable enterprise solutions.

In the end, it all comes back to the GNU Manifesto: give the software away and charge for the support. Those who want it for free can have it, but for most companies it makes more sense to get something with enterprise support.

So, to answer the question: I would say that VMware makes sense on many levels, and adding OpenStack on top of VMware simply opens more doors to a well-supported private or hybrid cloud environment.
