Enterprise Architecture – When is good enough, good enough?

In a recent conversation with a large customer, we were discussing their enterprise architecture.  A new CIO had come in and wanted to move them to a converged infrastructure.  I dug into what their environment would look like as they migrated, and why they wanted to make that move.  It came down to a good enough design versus maximizing hardware efficiency.  Rather than trying to squeeze every bit of efficiency out of the systems, they were looking at how they could deploy a standard and still get a high degree of efficiency, but the focus was on time to market with new features.

My first foray into enterprise architecture came early in my career at a casino.  I moved from a DBA role into a storage engineer position vacated by my new manager.  I spent most of my time designing for performance to compensate for poorly coded applications.  As applications improved, I started pushing application teams and vendors to fix the code on their side.  As I began working on virtualization infrastructure design for this and other companies, I took pride in driving CPU and memory as hard as I could, getting as close to maxing out the systems as possible while still leaving enough overhead for failover.  We kept putting more and more virtual systems into fewer and fewer servers.  In hindsight, we spent far more time designing, deploying, and managing our individual snowflake hosts and guests than we were saving in capital costs.  We were masters of “straining the gnat to swallow the camel”.

Good enterprise design should always take advantage of new technologies.  Enterprise architects must keep an eye on roadmaps to prevent obsolescence.  With the increased rate of change (just think about unikernels vs. containers vs. virtual machines), we are moving faster than the hardware refresh cycles of all of our infrastructure.

This doesn’t mean that converged or hyper-converged infrastructure is better or worse.  It is an option, but a restrictive one, since the vendor must certify your chosen hypervisor, management software, automation software, etc. against each part of the system they put together.  On the other hand, building your own means you have to do that certification work yourself.

The best solution is going to come with compromises.  We cannot continue to measure virtual machines or services per physical host.  Time to market for new or updated features is the new infrastructure metric.  The application teams’ ability to deploy packaged or developed software is what matters.  For those of us who grew up as infrastructure engineers and architects, we need to change our thinking, change our focus, and continue to add value by being partners to our development and application admin brethren.  That is how we truly add business value.


AUTOMATING MY HOME PART 6: Wireless Security cameras revisited, wireless segmentation, and flood lights

Previously I talked about looking at the Lorex security camera system, which uses wired Power over Ethernet (PoE).  From a physical security perspective, and for not being restricted by the location of power outlets, it made sense.

As usual, though, the “Wife Acceptance Factor” was the real test.  I have learned that she, as my primary user, gives me the best feedback on what is a good idea.  I started asking her what she wanted out of a camera system.  It turns out it is less about security and more about checking in on the kids when they get home from school, looking in on the dog, and making sure the front door is closed.  This led us to look at wireless cameras, due to the challenges of running wires between floors of the house.

We settled on the Samsung SmartCam HD for the simple reason that it supports local recording to an SD card.  While this was not critical, it was helpful for short-term replays to keep an eye on the house, and more importantly for capturing hilarious incidents like Nerf gun wars with the kids, or the two of us trying to get our Christmas tree out of the house.  The quality is very solid, and we have had no major complaints after six weeks.

With the cameras and the ecobee3 thermostat, we started to see a number of devices that simply need internet access but do not need access to the local LAN.  When I put in the Ubiquiti WAP, I initially set up three SSIDs, all on separate VLANs: the main wireless for media and work, the kids’ network, and the guest network.  This weekend I added a fourth for our home automation devices.  While this is not strictly necessary, it is nice to keep them separated from the rest of our devices, and it limits our exposure if there is a problem.
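The isolation policy behind that fourth SSID can be sketched as a few firewall rules on the router.  This is a minimal example, assuming a Linux-based gateway where the home automation SSID lands on VLAN 40 (`eth0.40`), the trusted LAN on VLAN 10 (`eth0.10`), and the WAN uplink is `eth1`; the VLAN IDs and interface names are placeholders, not my actual setup.

```shell
# Goal: home automation VLAN gets internet access only, never the trusted LAN.
# eth0.40 = home automation VLAN, eth0.10 = trusted LAN, eth1 = WAN (assumed names).

# Allow return traffic on existing connections, so the trusted LAN
# can still reach the cameras and thermostat when it initiates.
iptables -A FORWARD -i eth0.40 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Block anything new the automation devices try to start toward the trusted LAN.
iptables -A FORWARD -i eth0.40 -o eth0.10 -j DROP

# Everything else from the automation VLAN (internet-bound via the WAN) is fine.
iptables -A FORWARD -i eth0.40 -o eth1 -j ACCEPT
```

On a UniFi or EdgeOS device the same policy would be expressed through the controller GUI or `set firewall` configuration rather than raw iptables, but the logic is identical: established traffic in, new connections out to the WAN only, nothing new toward the LAN.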

Recently we had some weird sounds in our backyard well after dark.  It sounded like a bobcat or a coyote had gotten ahold of a house cat or small dog.  We checked in the morning and didn’t see signs of anything, but to be honest it was a bit disconcerting.  My newest project is to put up floodlights in the back yard, enough to scare off anything that comes in to visit.  The main challenge has been how to make it look professional, and how to involve some type of automation.  I looked at pure motion sensors, but that didn’t seem to be what we wanted.  I am considering some type of smart lighting system, but ultimately it may come down to a simple remote switch connected to the light.

The backyard lighting gets more important as we move into spring and summer, when we will spend more time outdoors.  We are also building a fire pit soon; it is not automation related, but I am working on how to include something electronic, probably a Bluetooth speaker or something similar.  Always fun, and always one more project.


Ravello Systems: a very good replacement for home labs, almost.

As a vExpert I have been privileged to have the use of Ravello Systems, https://www.ravellosystems.com/.  For those not familiar with it, they basically front-end AWS and Google Cloud Platform, enabling you to run most modern operating systems, including VMware vSphere, through a simple interface.

As a technologist, I always have a number of projects going.  I am a hands on type of person, and I like to understand how things work by building and breaking them.  This normally happens on various lab equipment I purchase, or inherit, which works for the most part, although it is an expensive hobby.

Ravello Systems was intriguing beyond plain AWS or Google Cloud mainly because of its simple interface, blueprint-based approach, and the ease of spinning up a quick vSphere lab, or even some of the random things I needed to test, such as a Vyatta-based firewall (don’t ask).  The most time-consuming part of creating a system was simply the time it took to upload ISOs when I needed something custom.  The price for what I do is pretty reasonable when you consider the cost of the infrastructure and the time I actually spend running things in the lab.

Of course, no system is perfect.  My biggest issue was the inability to run a VMware vCenter 6 Appliance in their cloud.  I tried a number of hacks, but the only thing that worked was running it nested, which was just too slow for what I needed.  I also struggled with some security concerns; not an issue on their end, just my own reservations when I debated running the UniFi controller for my home Wi-Fi as a cloud service.

One of my favorite uses was digging deeper into Docker.  While I can deploy containers as VMs in Fusion on my laptop, it seemed more logical to run some tests that were actually in the cloud.  Impressively simple again, and reasonably responsive even though they were running as nested systems.

Going forward, the future of not just labs but many production applications is likely to be spread across multiple cloud service providers and probably some internal systems.  For my purposes this model works quite well.  I appreciate this service from Ravello Systems, and I would suggest it could make a good home lab replacement, if we can just stop hugging our home lab systems.
