HP Chromebook 14 LTE

Changing gears from the usual VMware storage posts, I wanted to talk about a new toy. After much deliberation I recently decided to get an HP Chromebook. It was between this, an HP Android tablet with a keyboard, and a 14-inch HP laptop, and at the end of the day it came down to price, weight, and function. I finally settled on the 14-inch Chromebook with LTE, primarily because it comes with 4GB of RAM and is very lightweight, but still gives me enough power to do what I need.

So, a little about my daily life: I spend the majority of my time running between meetings, taking notes on my iPad, and sending e-mails as fast as I can type. I have tried many methods for remoting into my work laptop, which typically sits on my desk in my home office, from my iPad, but I just can't make the switch permanently. So I am stuck carrying a workstation-replacement laptop, just in case I need something I can't get through remote access from the iPad.

I did quite a bit of research on this and identified two use cases for the Chromebook that set it apart and made it my choice. First, the 14-inch screen makes remote access to my work laptop much easier than the smaller screen on my iPad does. While I have a Bluetooth keyboard for my iPad, I do not have a mouse, so I rely on the touch screen, which can be challenging with legacy apps. The full keyboard and touchpad on the Chromebook, in addition to the screen size, finally won me over.

Since ChromeOS is based on Linux, there are methods to expose the underlying OS and end up with a fully functional Linux laptop. This also led me to consider the Chromebook, and this one in particular, due to its weight and having the most memory for the price. There are two different methods: ChrUbuntu, which installs a separate environment, and Crouton, which leverages the underlying OS and sets up a chroot environment to work in. I chose Crouton, because I can seamlessly switch back and forth. For my particular install I followed Terry Britton's post, http://terrybritton.com/copy-and-paste-crouton-linux-on-chromebook-commands-959/, since it gave me the commands to install the various desktops. I am currently running LXDE, one of my personal favorites, but I will likely test out GNOME, and if it works well I may switch.
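
For anyone curious what the install actually looks like, the basic Crouton flow is only a few commands. This is a minimal sketch rather than the exact steps from Terry's post; it assumes the Chromebook is already in developer mode and that the crouton script (from github.com/dnschneid/crouton) has been downloaded to ~/Downloads:

    # Open crosh with Ctrl+Alt+T, then drop into a real shell:
    shell

    # Build a chroot with the LXDE desktop (this downloads quite a bit):
    sudo sh ~/Downloads/crouton -t lxde

    # Start the desktop; you can then hop between ChromeOS and the
    # chroot without rebooting:
    sudo startlxde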

I am still pretty new to the Chromebook, but I have been using the Chrome Remote Desktop app, which has worked pretty well. It is a little more bandwidth-intensive than some alternatives, but it allows me to access my work laptop from anywhere, and with the larger 14″ screen it is quite an improvement over my iPad. Having the mouse helps as well, and the keyboard response is nice.

I am not typically one for product reviews, but it is nice to put something out about the new things I have been using, and I am really impressed with this laptop. As always, this is not an official HP post, but I do have to say a big well done to the product team at HP. This is something I am proud to carry, and as a geek, I can say it is fun to talk about how I had it in developer mode within 30 minutes of opening the box. I have also received an HP Slate 7, so I plan to write a little about that soon, mostly comparing it to my iPad and giving some thoughts on iOS versus Android. I do love new toys, and these are certainly helping me be more productive at work.


VMware Storage Part 8: Storage Networking Revisited

Storage networking is a topic that could easily descend into deep religious debate, but I often get questions from customers and partners along the lines of: in a virtualized environment, does the storage network even matter? If we are virtualizing, why should we care what the storage network looks like? The specific question more recently was around 1GbE iSCSI versus SAS, so I want to address the SMB market space specifically, though the decision points are not dissimilar elsewhere.

To start, a quick look at the SAS protocol. SNIA has a great presentation on the differences between the various storage networking protocols: http://www.snia.org/sites/default/education/tutorials/2011/spring/storman/GibbonsTerry_Shareable_Storage_With_Switched_SASv2.pdf. Switched SAS, as it points out, is not a standard so much as a way of conveying unique SAS attributes; it is yet another way, primarily in highly dense server scenarios, to present shared storage. Essentially, it is a way of sharing out direct-attached storage. The main draw is speed over 1GbE iSCSI: SAS generally runs at 6Gbps per lane, and can run over 4 lanes for an aggregate 24Gbps.

The main challenges for SAS center on deployment and cost. It is often looked at as a cost-saving measure over Fibre Channel: high speed at a lower cost. The challenge is that it is fairly limited in its scalability, and it introduces complexity not found in iSCSI. Bringing in new switches and zoning them is reminiscent of Fibre Channel, which is far more scalable.

iSCSI is not without its challenges, of course. There is the consideration of using separate, physically isolated switches from the remainder of the network, or using VLAN tagging on existing switches. 1GbE iSCSI links can be saturated given enough utilization, and proper design is critical to minimize latency.
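
To give a sense of how little it takes to stand up, here is a minimal sketch of enabling the software initiator and VLAN-tagging an iSCSI port group from the ESXi 5.x shell; the port group name iSCSI-A, VLAN 20, adapter name vmhba33, and target address are all assumptions for illustration:

    # Enable the software iSCSI initiator:
    esxcli iscsi software set --enabled=true

    # Isolate iSCSI traffic on an existing switch with a VLAN tag:
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-A --vlan-id=20

    # Point the initiator at the array and rescan for devices:
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.20.50:3260
    esxcli storage core adapter rescan --adapter=vmhba33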

So, to answer the question of why it matters: the first response is, is it supported? VMware publishes an exceptional Hardware Compatibility List, http://www.vmware.com/resources/compatibility/search.php, which should always be the first stop for these decisions. Secondary to that, know your environment. While switched SAS does have its place, at this point in the SMB space it often makes sense to stick with a known quantity. Every environment already has an IP network, so leveraging or extending it is the simplest way to move forward. This keeps the environment standards-based and does not lock you into a specific solution. At the end of the day, beyond what is supported, the best design principle is to keep everything simple. While the choice may not matter much as long as it is supported, generally speaking the best designs are the ones that are well documented, easily repeated, and simple.

As always, there are exceptions to every rule, but I would say that iSCSI is preferable to SAS for all of those reasons. Why make things more difficult than they need to be?


VMware Storage Part 7: P2000

Moving on from the more general VMware storage topics, I think it is good, since I work for HP and spend much of my day designing HP storage solutions for virtual environments, to talk a little about the different models, where they fit, and why it is good to have more than one storage system to consider when designing for a VMware environment.

The HP P2000 family is now in its 4th generation. This is HP's entry-level SAN: solid performance, a standard modular design, and an easily learned, quite intuitive interface. It is an excellent platform, and not just for small businesses. The simplicity of the design scales out very well for users with a middleware layer, such as VMware, to manage multiple arrays.

The biggest draw of this device is the variety of connectivity options. The P2000 allows for SAS connectivity, either direct-connected or through a small SAS switch, as well as 1GbE iSCSI, 10GbE iSCSI, and FC. There is also a combination controller allowing for iSCSI and FC in the same system. This level of flexibility enables environments to be designed around multiple protocols, or, in smaller environments, to take advantage of less costly protocols.

The user interface on the P2000 is simple and functional. Provided the user understands some basic server terminology, the P2000 can be configured easily, and even snapshots and replication are quickly provisioned. Licensing follows a per-array model: if you want snapshots or replication, you license the array rather than paying a per-TB charge. The system can be administered through a user-friendly web GUI, a robust CLI, or plugins for VMware vCenter.
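
The CLI is worth a quick look. Here is a small sketch of poking around over SSH to the management controller; the command names are from memory of the P2000 CLI reference, so double-check them against the documentation for your firmware:

    # Inventory the system, disk groups, and volumes:
    show system
    show vdisks
    show volumes

    # Licensed features (snapshots, replication) show up per array:
    show license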

The only real downsides to this system are the small amount of cache and the limited feature set. For the most demanding applications and users, this might not be the best fit, simply because they will want the larger amount of more expensive DRAM found in higher-end arrays. This can be mitigated with I/O accelerators on the server side, or by scaling out with multiple systems, so it is not a huge problem. The limited feature set, again, is not always a bad thing. It is critical to understand what is needed from the array and to plan accordingly. For example, if thin provisioning is a critical success factor, this might not give you the same capability as a 3PAR. On the other hand, if cost is the biggest factor and you are constrained to a 1GbE iSCSI network, it is a perfect fit.

Another option not often considered with the P2000: while it is a block-only array, it can be paired with the HP StoreEasy file gateway to provide file services with built-in deduplication and a familiar Windows interface. What does this have to do with a VMware environment? It has been my experience that many VMware environments run primarily Microsoft Windows, which means Windows file shares are quite important, and often overlooked.

In a VMware environment, this is a great shared storage system. It is easy to administer, it is a good value, and it enables many of the VAAI features available with VMware. Additionally, it is one of the most flexible systems on the market. When you absolutely need SAS or 1GbE iSCSI connectivity, it is always a great fit. At the end of the day, there is a reason companies like HP have multiple storage offerings, and this one is exceptional in its space.
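
Once a P2000 volume is mapped to a host, verifying it from the ESXi shell is quick. A small sketch, assuming ESXi 5.x; the naa device ID shown is a made-up placeholder for whatever your LUN reports:

    # Rescan and confirm the newly mapped LUN is visible:
    esxcli storage core adapter rescan --all
    esxcli storage core device list

    # Check which VAAI primitives the device reports:
    esxcli storage core device vaai status get -d naa.600c0ff000d123456789000000000000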


VMware Storage Part 6: Object Storage

Object storage is starting to come back around recently with services like Box.net and Dropbox, to name a few. It is really a simple concept, and it can be important in a virtualized environment, especially when you look at industry trends.

Object storage is really quite simple. You store a file not as blocks on a file system, but rather as an "object" on a raw device. This is not so different from giving a database a raw mount point and letting it manage the storage internally. The concept is that I place an object in the store and define a policy saying I want it protected at an R1/R10 level. This means that when I create or modify the object, the system keeps two copies, generally on separate physical hardware and, in some cases, in separate geographies.

This sounds very much like the old LeftHand, now StoreVirtual, concept, where data is constantly replicated to keep multiple copies. The major difference is that the storage is accessed via an API (application programming interface). Think about Dropbox for a second. I place a file in my Dropbox folder on my laptop; it is replicated up to the "cloud" and becomes available on my iPad, iPhone, web browser, and so on. I didn't tell it what to do, I just placed a file in the folder and it was suddenly available everywhere I am. I can have the same object on a Mac and a Windows PC without worrying about incompatible file systems, permissions, or many of the other things inherent to file storage models. The great part is that I don't worry about backing up the file when I image my laptop, because it comes right back when I link the freshly imaged laptop to the application calling the API.

What is happening here is that storage is being controlled programmatically: the object, my file, is being replicated, and I don't think about anything going on in the back end. It is simple, and it is cost effective. I am using the object storage as a repository, so there is no performance expectation.
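
To make the API-driven model concrete, here is a minimal sketch against an OpenStack Swift style endpoint; the URL, account, container name, and token are all hypothetical placeholders:

    # Store an object: an authenticated HTTP PUT, no file system in sight.
    # The cluster keeps however many replicas its policy dictates.
    curl -X PUT -T report.pdf \
         -H "X-Auth-Token: $TOKEN" \
         https://swift.example.com/v1/AUTH_demo/archive/report.pdf

    # Retrieve it from anywhere with a GET against the same URL:
    curl -H "X-Auth-Token: $TOKEN" \
         https://swift.example.com/v1/AUTH_demo/archive/report.pdf -o report.pdf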

The performance in and of itself is an interesting discussion point. This is not something I am going to run my virtual desktop environment on, at least not just yet, but it is a great way to put some large, inexpensive drives out and present them to my end-user community as a place to store data. This enables me to archive more efficiently and share files across heterogeneous environments.

So what does this have to do with VMware? When we design a VMware environment, we often look at just the virtual infrastructure. We think about today's stateful applications, and we assume virtualization will just simplify our lives. It is critical to take a step back and think about what changes with a virtual environment. If we are simplifying and consolidating, we are also introducing new complexities and potentially opening the door to a whole new series of challenges. Using our expensive primary VMware storage as an archive platform may not be the wisest solution, and keeping a Windows file-server VM chugging away in the virtual environment just to serve that archive may be more costly still.

One of my favorite sayings is that there are nine ways to skin a cat (no offense to cat lovers), and as we design VMware storage solutions, it is critical to look at the project at a larger level. As I learn more about OpenStack, I will be posting more on the Software Defined Datacenter and what I believe it really means.
