Containers & VMs – which to choose?

Containers are all the rage – or are they?

For many, the concept of a container is something new. While containers themselves have been around for quite some time, only in the past couple of years have they gone mainstream. As we constantly look to shrink our datacenter footprint, deploy applications faster and iterate quicker, containers are simply the next logical step. However, what is oftentimes either forgotten or misunderstood is the real reason why we deploy containers.

When I first saw a virtual machine created, deployed and then vMotioned to another host, I couldn’t believe it. I knew at the time, circa the early 2000s, that this piece of technology would revolutionize the way we consume servers, design datacenters and consume resources. Density per cabinet dramatically increased as we were able to run hundreds of virtual machines on a half dozen physical servers. Reductions in power, networking and cooling were just a few of the added benefits of virtualization.

With all of these benefits, why would someone need to pack and condense even more resources into the same amount of space that once housed hundreds or thousands of virtual machines? Haven’t we reached the limit of what we can consolidate? Are containers about consolidation, mobility or agility?

The answers to these questions can vary greatly depending on which line of business (LOB) you ask, the particular reason they are looking at adopting container technology, or who the influencer is within a particular customer. We have to remember the reason we build datacenters to begin with: the application! We build datacenters to deploy applications that run the revenue-generating processes that keep us employed. The application framework, layout and process vary greatly from customer to customer, but the underlying reason remains the same: deploy applications faster, iterate quicker and provide developers a first-class environment where they can consume resources in a scalable fashion, much like consumers are used to doing with electricity, water, cable, etc. If we need more electricity or water, we simply plug in more lights or leave the tap running longer. These resources are on demand and, typically, available to us any time of day.

The next iteration of development

In case you have been under a rock for the past couple of years, speed and scale win out just about every time when it comes to software development and pushing a developer’s code to production. While I haven’t been a developer for quite some time, the idea of having something like CoreOS on my laptop running inside VMware Workstation, spinning up multiple containers, pushing to and pulling from a repo like GitHub, and collaborating with my colleagues in real time makes the SDLC much more frictionless, cuts down on time to deploy and creates a repeatable process. Below is a sample from a CoreOS instance showing a running Fedora container, the size of the container and when it was created.

CoreOS instance running a container from CLI
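For readers who want to follow along, a session like the one pictured might look roughly like this. This is only a sketch using the Docker CLI that ships with CoreOS; the container name, image ID, size and dates shown in the comments are hypothetical and will differ on your system:

```shell
# Pull the Fedora image from the public registry
docker pull fedora

# Start a long-running container from it (name "demo" is arbitrary)
docker run -d --name demo fedora sleep infinity

# List the local image to see its size and when it was created,
# e.g. "fedora  latest  a1b2c3d4e5f6  2 weeks ago  204 MB"
docker images fedora

# Confirm the container is up
docker ps --filter name=demo
```

The same commands work anywhere the Docker engine runs, which is part of the repeatability argument: the workflow on a laptop inside VMware Workstation is the workflow in the datacenter.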

Containers aren’t meant to displace virtual machines but rather to usher in yet another way to develop applications faster, create a repeatable pattern and foster a DevOps culture. VMware, for example, has released several projects focused on the adoption of containers in a virtualized environment, such as vSphere Integrated Containers (https://blogs.vmware.com/vsphere/2015/10/vsphere-integrated-containers-technology-walkthrough.html) and Project Photon (https://blogs.vmware.com/cloudnative/introducing-photon/).

How have containers been used in your environment?

Are you using containers today? If so, in what fashion?

R.D.
