State of the market – HCI

Not everything is as simple as it appears

From time to time it helps to step back and dissect a topic of interest, particularly one that draws more attention and spectacle with every passing day. While HCI, or Hyper-Converged Infrastructure, isn't exactly new to those in the tech world, consuming those resources effectively across siloed business units is. When a piece of hardware, no matter what it is, begins to transform and be consumed through software, fundamental shifts must take place for that consumption to be effective, not only for the business but for operations and developers. For example, cloud computing in its native form has ushered in a new era where operations folks don't have to worry about blinking lights, network cables and power requirements. That shift in thinking has left some in the dark as to whose responsibility "x" service becomes now that it is software and not hardware.

HCI isn't about hardware versus software; it's about changing consumption models.

HCI is no different

By my last count there are roughly 50 different "hyper-converged" and "software defined hyper-converged" offerings on the market today. Whoaaaa!!! That can get complicated very quickly, even for seasoned guys like me. Just trying to keep up with a third of those companies is exhausting. The point is that hyper-converged offerings in their purest sense aren't going anywhere. In fact, they are just beginning to have drastic effects on the way enterprises consume resources, and in time this type of consumption model will take shape for most of the market. It's no great prediction to say that white-box or COTS servers coupled with software for the abstraction layer will be the dominant consumption model going forward. Why? Simple: economies of scale, no snowflake servers, and a repeatable process.

What does this mean for resource consumption in an organization that has historically consumed resources through a siloed, monolithic approach? Simply put, it means times are changing. No longer can the network team own an entire portion of the datacenter and not care about the server, storage and abstraction layers. The new mantra in IT is "a mile wide and an inch deep," meaning you should know the entire stack well enough to run an HCI appliance that hosts an app with a small team. Historically that team has been no bigger than a two-pizza team (meaning two pizzas at most to feed whoever is running your ops). Being able to see, interpret, optimize and amortize an environment against best practices across your organization is a very sought-after skill that few seem to have mastered.

Recently, I met with a customer to discuss the state of the HCI marketplace, where EMC fits into it, and why solution "x" would be a better fit for him than solution "y". What I quickly found was that he treated either solution, "x" or "y", the same way he treated individual storage, network, compute and abstraction layers. He hadn't yet grasped that he no longer needed to worry about connecting all of these components together, or about updates, patches, auditing, etc. While I certainly don't fault him for this, it's a trend I see continuously. Individuals, even those at the executive-sponsor level, are having a hard time figuring out how to "consume" these new offerings, and truthfully I don't blame them.

Where do we go from here?

To all those wondering how, what, why and so on, please know you are not alone. This paradigm shift in the way resources are consumed, brokered, amortized and automated is a change for everyone. We have been shackled by the past and by our decisions, for whatever reason, to keep these core components separate. Fear not, help is on the way if you keep an open mind and DON'T FEAR CHANGE!!!

R.D.
