New year = new perspective
As I start a new year, I have to reflect on the ideas of the past, such as keeping distinct silos for our applications, business units, and data. We even do this in our personal lives; think of those of us who carry separate cell phones for personal and work use. Why do we feel the need to maintain separate, disparate ecosystems for our resources when, in all likelihood, most of us and our resources can play nicely together? Think about it for a second: would you split your family's cell phone bills across four or five different accounts? Would you book your family on different flights just to end up at the same destination? Probably not. So why do we treat our datacenters, applications, infrastructure and automation engines as siloed, disparate entities, bound by limitations that won't let a datacenter run and shift as easily as an iPhone?
A datacenter is basically a mixture of compute, networking, storage, applications, power and more under one roof, serving one purpose: making sure customers, partners and users can access the resources they need, when and where they need them. Taking this idea further, a CPU in a bare-metal server is the same as a CPU in a virtual server, and even more closely resembles a CPU sitting under a container in a Docker or CoreOS environment. Once you understand that a CPU is a CPU, memory is memory and storage is storage, you can begin to modernize, automate and scale your resources as dynamic pools instead of siloed, disparate parts. A server going down, whether physical or virtual, shouldn't require paging a sysadmin or a NOC. The resources sitting in the datacenter should fail gracefully and automatically, requiring zero human interaction. Google, Facebook and Amazon have all figured out that infrastructure should be treated as code, and as we all know, code can handle failure gracefully and without human intervention.
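To make the "infrastructure as code" idea concrete, here is a minimal, purely illustrative sketch of the reconciliation loop that systems like Kubernetes and Mesos run at scale: declare a desired number of workers, detect failures, and replace them automatically. Every name here (`is_healthy`, `start_worker`, `reconcile`) is invented for this example, not a real API.

```python
# Hypothetical sketch: a reconciliation loop that keeps a pool of
# identical workers at its desired size, with no pager involved.
DESIRED_REPLICAS = 3

def is_healthy(worker):
    # Stand-in health check; a real one would probe a process or port.
    return worker["alive"]

def start_worker(worker_id):
    # Stand-in for launching a VM, container, or process.
    return {"id": worker_id, "alive": True}

def reconcile(pool):
    """Drop failed workers and start replacements automatically."""
    healthy = [w for w in pool if is_healthy(w)]
    next_id = max((w["id"] for w in pool), default=-1) + 1
    while len(healthy) < DESIRED_REPLICAS:
        healthy.append(start_worker(next_id))
        next_id += 1
    return healthy

pool = [start_worker(i) for i in range(DESIRED_REPLICAS)]
pool[1]["alive"] = False   # simulate a server dying
pool = reconcile(pool)     # pool is back at the desired size
```

The point is not the code itself but the posture: the desired state is written down, and the system converges toward it on its own.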
Code for all!
Thinking about these new cloud-native application stacks, microservices and utility-based consumption models, it's easy to see that datacenters of the future will embrace a fail-fast, fail-often mentality. As cloud pioneers such as Werner Vogels have noted, injecting failure into your application makes it more resilient, since the best way to know whether your application can withstand failure is to put it into production. Nothing works like production! Mesosphere, to name just one, has done a fantastic job of automating the datacenter, fittingly naming its product the "datacenter operating system," or DCOS. DCOS is literally an operating system for your datacenter that lets you treat the CPU in your MacBook as if it were a CPU in an Intel x86 server sitting in a colo. With this mentality, Mesosphere has been able to commoditize clouds on-prem, off-prem, or any hybrid mixture of the two. It's no coincidence that the Googles of the world figured this out more than ten years ago, and now, with solutions such as Mesosphere, enterprises and SMBs can start small, scale infinitely and stop worrying about where, how and when a workload is moved, deployed or replicated.
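The failure-injection idea above can be sketched in a few lines, in the spirit of Netflix's Chaos Monkey: repeatedly kill a random instance of a service and check that it heals itself without anyone being paged. This is a toy model with invented names (`Service`, `kill_random_instance`, `self_heal`), not any vendor's actual tooling.

```python
import random

class Service:
    """Toy service: a set of instances, each either alive or dead."""

    def __init__(self, replicas):
        self.instances = {f"instance-{i}": True for i in range(replicas)}

    def kill_random_instance(self):
        # Chaos step: take down one instance at random.
        victim = random.choice(list(self.instances))
        self.instances[victim] = False
        return victim

    def self_heal(self):
        # A resilient service restarts dead instances on its own.
        for name, alive in self.instances.items():
            if not alive:
                self.instances[name] = True

    def healthy(self):
        return all(self.instances.values())

svc = Service(replicas=5)
for _ in range(10):          # inject failure early and often
    svc.kill_random_instance()
    svc.self_heal()          # recovery must need no human
```

If the service still reports healthy after repeated kills, you have some evidence it can survive the failures production will inevitably throw at it.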
How have you begun automating, abstracting and pooling resources for your data center?
Are you deploying across multiple clouds? If so, what scaling challenges have you faced?