To home lab or not to home lab

As I often do, I am again debating my need for a home lab.  My job is highly technical: taking technology architecture and tying it all together with the strategic goals of my customers.  Keeping my technical skills up to date is a full-time job in and of itself, which raises the question: should I build out a home lab, or are my cloud-based labs sufficient?

One of the perks of working at a large company is the ability to use our internal lab systems.  I can also run VMware Workstation or Fusion on my laptop, which affords some testing capability, limited mostly by memory constraints.  Most of the places I have worked have had great internal labs, demo gear, and so on, which has been nice.  I have often maintained my own equipment as well, but to what end?  Keeping the equipment up to date becomes a full-time job in itself, and adds little value to my daily work.

With the competition among cloud providers, many now offer low- or no-cost environments for testing.  While this is not always ideal, for the most part we are now able to run nested virtual systems and test various hypervisors and other solutions.  Many companies now deliver their products as virtual appliances, which makes it practical to stay fairly up to date.
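
As a quick sanity check before leaning on nested labs, here is a minimal Python sketch (assuming a Linux host with the KVM modules loaded; the paths are standard sysfs locations) that reports whether nested guests are enabled:

```python
#!/usr/bin/env python3
"""Check whether a Linux host allows nested virtualization under KVM."""
from pathlib import Path


def nested_virt_enabled() -> bool:
    # KVM exposes a "nested" flag in sysfs; it reads "Y" (or "1" on
    # older kernels) when nested guests are permitted.
    for module in ("kvm_intel", "kvm_amd"):
        flag = Path(f"/sys/module/{module}/parameters/nested")
        if flag.exists():
            return flag.read_text().strip() in ("Y", "y", "1")
    return False  # no KVM module loaded, or nesting unsupported


if __name__ == "__main__":
    print("Nested virtualization enabled:", nested_virt_enabled())
```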

Of course, one of my favorites is VMware’s Hands-on Labs.  In fairness, I am a bit biased, working at VMware and with the Hands-on Labs team as often as I can.  Since a large majority of what I do centers around VMware’s technology, I will often run through the labs myself to stay sharp on the technology.

While the home lab will always have a special place in my heart, and while I am growing a rather large collection of Raspberry Pi devices, I think my home lab will be limited to smaller, lower-power devices for IoT testing for the moment.  That is always subject to change, but it is tough to justify the capital expenditure when there are so many good alternatives.
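
For a flavor of what that low-power IoT testing looks like, here is a minimal sketch, assuming a Raspberry Pi, the paho-mqtt library, and a hypothetical MQTT broker on the local network, that publishes the Pi's CPU temperature once a minute:

```python
#!/usr/bin/env python3
"""Publish a Raspberry Pi's CPU temperature to an MQTT broker."""
import time
from pathlib import Path

import paho.mqtt.publish as publish  # pip install paho-mqtt

BROKER = "broker.example.local"  # hypothetical broker for this sketch


def read_cpu_temp_c() -> float:
    # The Pi reports the SoC temperature in millidegrees Celsius.
    raw = Path("/sys/class/thermal/thermal_zone0/temp").read_text()
    return int(raw.strip()) / 1000.0


while True:
    publish.single("homelab/pi/cpu_temp",
                   payload=f"{read_cpu_temp_c():.1f}",
                   hostname=BROKER)
    time.sleep(60)
```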

Enterprise Architecture – When is good enough, good enough?

In a recent conversation with a large customer, we were discussing their enterprise architecture.  A new CIO had come in and wanted to move them to converged infrastructure.  I dug into what their environment would look like as they migrated, and why they wanted to make that move.  It came down to a good-enough design versus maximizing hardware efficiency.  Rather than trying to squeeze every bit of efficiency out of the systems, they were looking at how to deploy a standard and still get a high degree of efficiency, but the focus was more on time to market with new features.

My first foray into enterprise architecture was early in my career at a casino.  I moved from a DBA role into a storage engineer position vacated by my new manager.  I spent most of my time designing for performance to compensate for poorly coded applications.  As the applications improved, I started to push application teams and vendors to fix the code on their side.  As I began working on virtualization infrastructure design for this and other companies, I took pride in driving CPU and memory as hard as I could, getting as close to maxing out the systems as possible while providing just enough overhead for failover.  We kept putting more and more virtual systems into fewer and fewer servers.  In hindsight, we spent far more time designing, deploying, and managing our individual snowflake hosts and guests than we were saving in capital costs.  We were masters of “straining at a gnat to swallow the camel”.
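
To make that failover-overhead math concrete, here is a back-of-the-napkin sketch; the host counts and memory sizes are hypothetical, but the calculation is the one we were chasing:

```python
def usable_cluster_capacity(hosts, host_memory_gb, failures_to_tolerate=1):
    """Memory available to VMs once N host failures are reserved for HA."""
    surviving = hosts - failures_to_tolerate
    if surviving <= 0:
        raise ValueError("cluster cannot tolerate that many failures")
    return surviving * host_memory_gb


# Example: 8 hosts with 256 GB each, reserving one host for failover,
# leaves 1792 GB for guests. Driving utilization much past that means
# a host failure leaves nowhere to restart the workloads.
print(usable_cluster_capacity(8, 256))  # 1792
```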

Good enterprise design should always take advantage of new technologies, and enterprise architects must be looking at roadmaps to prevent obsolescence.  With the increased rate of change (just think about unikernels vs. containers vs. virtual machines), we are moving faster than the hardware refresh cycles on all of our infrastructure.

This doesn’t mean that converged or hyper-converged infrastructure is better or worse.  It is an option, but a restrictive one, since the vendor must certify your chosen hypervisor, management software, automation software, etc. with each part of the system they put together.  On the other hand, building your own requires you to do that certification work yourself.

The best solution is going to come with compromises.  We cannot continue to measure ourselves by virtual machines or services per physical host.  Time to market for new or updated features is the new infrastructure metric.  The application teams’ ability to deploy packaged or developed software is what matters.  For those of us who grew up as infrastructure engineers and architects, we need to change our thinking, change our focus, and continue to add value by being partners to our development and application admin brethren.  That is how we truly add business value.

There can be only one…or at least less than there are now.

Since the recent announcement of Dell acquiring EMC, there has been great speculation about the future of the storage industry.  In previous articles I have observed that small storage startups are eating the world of big storage.  I suspect that this trend had something to do with the position EMC recently found themselves in.

Watching Nimble, Pure, and a few others IPO recently, one cannot help but notice that there are still far more storage vendors standing than expected, with new ones appearing regularly; the storage market has not consolidated as we thought it would.  During recent conversations with the sales teams of a couple of storage startups, we discussed what their act two was to be.  I was surprised to learn that for a number of them it is simply more of the same: perhaps a less expensive solution to sell down-market, perhaps some new features, but nothing really new.

Looking at the landscape, there has to be a “quickening” eventually.  With EMC being acquired, HP not doing a stellar job of marketing the 3PAR product they acquired, NetApp floundering, and Cisco killing their Whiptail acquisition, we are in a sea of storage vendors with no end in sight.  HP splitting into two companies bodes well for their storage division, but the biggest challenge for most of these vendors is that they are focused on hardware.

For most of the storage vendors, a lack of customers will likely drive them out of business when they finally run out of funding.  Some will survive, get acquired, or merge to create a larger storage company, and probably go away eventually anyway.  A few will continue to operate in their niche, but the ones who intend to have long-term viability are going to need a better act two: something akin to hyper-converged infrastructure or, more likely, a move to a software approach.  While neither is a guarantee, both have higher margins and are more in line with where the industry is moving.

We are clearly at a point where hardware is becoming commoditized.  If your storage array can’t provide performance and the features we now assume to be standard, you shouldn’t even bother coming to the table.  The differentiation has to be something else, something outside the norm.  Provide some additional value with the data, turn it into software, integrate it with other software, make it standards-based.  Being the best technology, the cheapest price, or simply the biggest company doesn’t matter anymore.  Storage startups, watch out: your 800 lb gorilla of a nemesis being acquired might make you an even bigger target.  You had better come up with something now, or your days are numbered.

The times they are a changin

Disclaimer: I am a VMware employee. This is my opinion, and has been my opinion for some time prior to joining the company. Anything I write on this blog may not be reflective of VMware’s strategy or their products.

With this week’s announcements from VMware, there has been a great deal of confusion about what made it into the release. So as not to add to it, I wanted to focus on something you likely missed if you weren’t watching closely. As I said in the disclaimer, this is not a VMware-specific post, but they do seem to be in the lead here.

For many years I was big on building management infrastructure. It was an easy gig as a consultant: it scales, and it is fairly similar from environment to environment. Looking back, it is a little funny to think about how hardware vendors did this. First they sell you servers, then they sell you more servers to manage the servers they sold you, plus some software to monitor it all. When we built out virtual environments we did the same thing. It was great, we used fewer physical servers, but the concept was the same.

If you pay close attention to the trends among the larger cloud providers, we are seeing a big push toward hybrid cloud. Now, this is not remarkable unless we look closer at management. The biggest value of hybrid cloud used to be that we could burst workloads to the cloud. As more businesses move to some form of hybrid cloud, it seems the larger value is not being locked into on-premises cloud management software.

At VMworld 2014, as well as during the launch this week, VMware touted their vCloud Air product. Whether you like the product or not, the thing that caught my eye is the outside model of management. Rather than standing up a management system inside the datacenter, you simply lease the appropriate management systems and software. Don’t like your provider? Great, get another. Again, I want to point out that I am using VMware as my example here, but there are others doing the same thing, just not at the same scale yet.

While this is not going to be right for everyone, we need to start rethinking how we manage our environments.  The times they are a changin.

The universe is big. It’s vast and complicated and ridiculous.

As I was meeting with a customer recently, we got onto the topic of workload portability. It was interesting: we were discussing the various cloud providers (AWS, Azure, and VMware’s vCloud Air, primarily) and how they, a VMware shop, could move workloads in and out of the various clouds.

Most industry analysts, and those of us on the front lines trying to make this all work or helping our customers make it work, will agree that we are in a transition phase. Many people smarter than I have talked at length about how virtualization and infrastructure as a service are a bridge to a new way of application development and delivery, one where all applications are delivered from the cloud, and where development is constant and iterative. Imagine Patch Tuesday every hour of every day…

So how do we get there? Well, if virtualization is simply a bridge, that raises the question of portability of workloads, virtual machines in this case. We have started down that path with the Open Virtualization Format (OVF), but that requires a powered-off virtual machine, which is exported, copied, and then imported to the new system, with the import process creating the proper format. But why can’t we live-migrate workloads between disparate hypervisors and clouds without downtime?
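
OVF is, at heart, an XML descriptor plus disk images, which is exactly why it is portable but not live. As a minimal sketch using only the Python standard library (the descriptor filename is hypothetical), this lists the virtual systems declared in an OVF envelope:

```python
import xml.etree.ElementTree as ET

# Namespace used by OVF 1.x descriptors.
OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"


def list_virtual_systems(descriptor_path):
    """Return the ovf:id of each VirtualSystem in an OVF descriptor."""
    envelope = ET.parse(descriptor_path).getroot()
    return [vs.attrib.get(OVF_NS + "id", "<unnamed>")
            for vs in envelope.iter(OVF_NS + "VirtualSystem")]


print(list_virtual_systems("appliance.ovf"))  # hypothetical descriptor
```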

From my perspective the answer is simple: it is coming, it has to, but the vendors will hold out as long as they can. For some companies, the hypervisor battle is still raging, yet I think it is safe to say we are seeing the commoditization of the hypervisor. Look at VMware’s products: they are moving from being a hypervisor company (again, nothing insider here, just review the expansion into cloud management, network and storage virtualization, application delivery, and so much more), and more and more they are able to manage other vendors’ hypervisors. We are seeing more focus on “cloud management platforms”, and everyone wants to manage any hypervisor. It follows, then, that standards must emerge around the hypervisor, virtual hard drives, the whole stack, so that we can start moving workloads within our own datacenters.

This does seem counterintuitive, but put it into perspective: there is very little advantage left in consolidation. Most companies are as consolidated as they will get; we are now just working to get many of them through the final 10% or so. It is rare to find a company that is not virtualizing production workloads, so we need to look at what is next. Standards must prevail, as they have in the physical compute, network, and storage platforms. This doesn’t negate the value of the hypervisor, but it does provide for choice, and for differentiation around features and support.

I don’t expect we will see this happen anytime soon, but that raises the question: why not? It would seem to be the logical progression.

EVO:RAIL, is technology really making things easier for us?

This week at VMworld, what had been Project Marvin was officially announced.  I want to add my voice to the debate about the use case for this, and where I believe the industry goes with products like it.  To answer the title question: EVO is a step in the right direction, but it is not the end of the evolution.  As always, I have no inside information and I am not speaking on behalf of VMware; this is my opinion on where the industry goes, and on what I think is cool and fun.

To understand this, consider something my wife said recently.  As a teacher, she was a bit frustrated this week to return to school to find her laptop re-imaged and her printer not configured.  I tried to help her remotely, but it is something I will need to work on when I get back.  Her comment was, “Technology is supposed to make things easier”.  This stung for a moment, after all technology is my life, but when I thought about her perspective, it struck me just how right she is.  Why, after all, shouldn’t the laptop have reached out, discovered a printer nearby, and been prepared to print to it?  My iPhone/iPad can do that with no configuration on the device itself.
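
The zero-configuration discovery I am wishing for here already exists; AirPrint-style printers announce themselves over mDNS. A minimal sketch using the python-zeroconf library (timing and error handling simplified) shows how little it takes to find them:

```python
import time

from zeroconf import ServiceBrowser, Zeroconf  # pip install zeroconf


class PrinterListener:
    """React to IPP printers appearing on and leaving the local network."""

    def add_service(self, zc, service_type, name):
        print(f"Found printer: {name}")

    def remove_service(self, zc, service_type, name):
        print(f"Printer gone: {name}")

    def update_service(self, zc, service_type, name):
        pass  # required by newer zeroconf versions


zc = Zeroconf()
# AirPrint-capable printers advertise the _ipp._tcp service over mDNS.
browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
time.sleep(5)  # give the network a few seconds to answer
zc.close()
```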

So what does this have to do with EVO?  If we look at EVO as a standalone product, it doesn’t quite add up.  It is essentially a faster way of implementing a product which is not too complicated to install.  I have personally installed thousands of vSphere nodes and hundreds of vCenters; it is pretty simple with a proper design.  The real value here, the trend, is simplification.  Just because I know how to build a computer doesn’t mean I want to.  Just because I can easily implement a massive vSphere environment doesn’t mean I want to go through the steps.  That is why scripting is so popular: it enables us to do repetitive tasks more efficiently.
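
As an example of the kind of repetitive task worth scripting, here is a minimal sketch using VMware's pyVmomi SDK to inventory the hosts in a vCenter; the hostname and credentials are placeholders, and certificate verification is disabled purely to keep the sketch short:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect  # pip install pyvmomi
from pyVmomi import vim

# Placeholder connection details for this sketch; use real credentials
# and proper certificate handling in anything beyond a lab.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)  # one line per ESXi host in the inventory
    view.Destroy()
finally:
    Disconnect(si)
```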

The second part of this really comes down to a vision of where we are going.  If you look at where we are headed as an industry, we are moving to do more at the application layer in terms of high availability, disaster recovery, and performance.  We see this with the OpenStack movement, the cloud movement, Docker, and so many others.  At some point, we are going to stop worrying about highly available infrastructure.  At some point our applications will all work everywhere, and if the infrastructure fails, we will be redirected to infrastructure in another location without realizing it.
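
As a toy illustration of that application-layer redirection, here is a sketch using only the Python standard library and hypothetical endpoints: the client walks a list of replicas until one answers, so a failed site never surfaces to the user:

```python
import urllib.error
import urllib.request

# Hypothetical replicas of the same service in different locations.
ENDPOINTS = [
    "https://app.us-east.example.com/health",
    "https://app.us-west.example.com/health",
    "https://app.eu-west.example.com/health",
]


def fetch_from_any(endpoints, timeout=2):
    """Try each replica in turn; one dead site is invisible to the caller."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            continue  # that location is down, move on to the next
    raise RuntimeError("no replica reachable")


print(fetch_from_any(ENDPOINTS))
```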

That is the future, but for now we have to find a way to hide the complexity from our users and still provide the infrastructure.  We need to scale faster, better, stronger, and more resiliently, without impacting legacy applications.  Someday we will all be free from our devices and will use whatever is in our hand, or in front of us, or just get a chip in our brains; someday HA won’t be an infrastructure issue.  Until then, projects like EVO will help us bridge that gap.  It is arguably not perfect, but it is a bridge to get us a step closer to a better world.  At the end of the day, the more complexity we hide with software the better off we are, provided that software is solid and we can continuously improve it.

Defining the cloud Part 4: Supported

As I bring this series to a close, I want to look at what I consider one of the final high-level requirements in evaluating a cloud solution.  In the previous posts, we looked at the cloud as being application-centric, self-service, and open.  These are critical, but one of the more important parts of any technology is support.  This is something which has plagued Linux for years.  Many of us consider Linux and UNIX to be far superior to Windows for many reasons; the challenge has been the support.  Certainly Red Hat has done a fairly good job of providing support around their Fedora-based Red Hat Enterprise Linux, but that is one distro.  Canonical provides some support around Ubuntu, and there are others.

The major challenge with the open source community is just that: it is open.  Open is good, but when we look at the broader open source community, many of the best tools are written and maintained by one person or a small group.  They provide some support for their systems, but often that is done as a favor to the community, or for a very small fee; they need to keep their day jobs to make it work.

One challenge which seems to be better understood with the cloud, especially around OpenStack, is the need for enterprise support.  More and more companies are jumping on board to provide support for OpenStack or their variant of it.  This works well so long as you only use the core modules which are common.  In order to make money, every company wants you to use their add-ons, which leads to some interesting issues for customers who want to layer automation, or other features not in the core, on top of the cloud.
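
To make the "stick to the core" point concrete, here is a minimal sketch using the openstacksdk library against the common compute API; "mycloud" refers to a hypothetical entry in clouds.yaml, and the same call should behave identically on any conformant OpenStack deployment:

```python
import openstack  # pip install openstacksdk

# "mycloud" is a hypothetical clouds.yaml entry; because this uses only
# the core compute API, it stays portable across OpenStack vendors.
conn = openstack.connect(cloud="mycloud")

for server in conn.compute.servers():
    print(server.name, server.status)
```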

At the end of the day, a compromise must be struck.  It is unlikely that most companies will use a single vendor for all their cloud software, although that could make things less challenging in some regards.  It comes down to trade-offs, but it is certain that we will continue to see further definition and development around the cloud, and around enterprise support for technologies which further abstract us from the hardware: technologies that enable us to be more connected, to use the data already being collected, and to take advantage of the devices being developed for this crazy new world.
