Chris Buckley

Chris is never satisfied with just achieving success: he is constantly looking for the next edge and is always focused on business outcomes in a commercial context, having spent nearly two decades solving perplexing problems within the most complex global environments.

Since joining us at Virtual Clarity he has developed distributed systems platform strategy and served as lead architect for a global enterprise private cloud.

What you might not expect is that he combined being a part-time racing driver with being the founding co-chair of the Open Data Center Alliance’s infrastructure workgroup (he still races, by the way). Chris is guided by a passion for identifying the right problems to solve from a business perspective, so feel free to challenge him.

Connect with Chris.

Service Management in a containerised environment: The nature of containers and the tech needed to run them

At Virtual Clarity, we are often asked whether the use of containers by enterprises drives a need for change in the approach to service management.

In this series of blogs, which will be going up over the next few weeks, we seek to answer this by considering:

  1. The nature of containers
  2. The tech needed to run containerised applications
  3. The types of applications that run well in containers
  4. The impact of containers on service management

Let’s start by going back to basics and addressing the nature of containers themselves.

The nature of containers

In a virtual machine (VM) world, such as VMware, a single piece of physical hardware hosts multiple, essentially independent, operating system (OS) instances that are isolated from each other but have the overhead of running an entire OS each. With containers, the underlying physical server runs a single operating system instance and the container technology makes that look like a separate OS instance from the perspective of the software running inside each hosted container.

All containers on a single host share a single OS instance. Given that running an OS instance has its own overhead, limiting it to one per physical host allows containers to make more efficient use of physical resources than VMs, but at the expense of rather less comprehensive workload isolation.
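To make that concrete, here is a minimal sketch, assuming a Linux host with Docker running and the docker Python SDK (docker-py) installed (the alpine image tag is purely illustrative), showing that a container reports the same kernel as its host, because it is an isolated process tree on a shared OS rather than a separate OS instance:

    # Minimal sketch: a container shares the host's kernel rather than booting
    # its own OS. Assumes a Linux host with Docker running and the "docker"
    # Python SDK installed; the alpine image tag is illustrative.
    import platform
    import docker

    client = docker.from_env()

    # Kernel release as seen by the host OS.
    host_kernel = platform.release()

    # Kernel release as seen from inside a freshly started container.
    container_kernel = client.containers.run(
        "alpine:3.19", "uname -r", remove=True
    ).decode().strip()

    print(f"host kernel:      {host_kernel}")
    print(f"container kernel: {container_kernel}")
    # On a Linux host the two match: the container is an isolated process
    # tree on the same kernel, not a separate operating system.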

So, in simple terms, a container is a way of packaging software into a single artefact that can be deployed to a (potentially) shared OS instance and run without having to be unpacked first.
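As a sketch of that packaging step (again assuming Docker and the docker Python SDK; the Dockerfile contents and the demo-app tag are illustrative), an image can be built once and then run as-is, with no separate unpack or install step:

    # Sketch of packaging software into a single artefact: build an image from
    # a deliberately trivial Dockerfile held in memory, then run the result
    # unmodified. Assumes Docker and docker-py; names and tags are illustrative.
    import io
    import docker

    DOCKERFILE = (
        b"FROM python:3.12-slim\n"
        b"CMD [\"python\", \"-c\", \"print('hello from inside the artefact')\"]\n"
    )

    client = docker.from_env()

    # Build the artefact once; the image bundles the runtime and the code.
    image, _build_logs = client.images.build(
        fileobj=io.BytesIO(DOCKERFILE), tag="demo-app:1.0", rm=True
    )

    # Run it on the (potentially shared) OS instance with no unpack or install step.
    output = client.containers.run("demo-app:1.0", remove=True)
    print(output.decode().strip())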

Now that we understand containers a little better, we can consider the technology required to run containerised applications and the characteristics of applications that are a good fit for containers.

The tech needed to run containerised applications

A fully functioning, multi-server container platform that is manageable at scale is a complex thing and relies heavily on policy-driven automation. Of course, in a “container platform as a service” (CaaS) environment (for example, on AWS, Azure, or GCP), someone else may build and operate that platform, but it still exists.

This platform has to deal with a range of tasks such as:

  • managing a library of containers ready for instantiation
  • deploying from the library when needed
  • placing each container on a host with the appropriate resources to run it
  • responding to events as driven by policy (e.g. scaling out through instantiating more instances)
  • assigning network addresses to new containers
  • tidying up after a container instance is shut down

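As a condensed, hypothetical illustration of a few of those tasks on a single host, the sketch below uses the docker Python SDK; a real multi-server platform (Kubernetes, ECS and the like) does this declaratively and at far greater scale, and the image name, labels and “scale to three” policy here are purely illustrative:

    # Hypothetical single-host sketch of some of the platform tasks listed
    # above, using the docker Python SDK. A real multi-server platform does
    # this declaratively and at scale; names, labels and the scaling policy
    # are purely illustrative.
    import docker

    client = docker.from_env()

    # Library management: make sure the image is available locally.
    client.images.pull("nginx", tag="1.27-alpine")

    # Deployment and placement: start an instance with explicit resource
    # limits, so the placement decision is at least visible.
    def start_instance(index):
        return client.containers.run(
            "nginx:1.27-alpine",
            name=f"web-{index}",
            detach=True,
            labels={"app": "web"},
            mem_limit="128m",
            nano_cpus=250_000_000,  # a quarter of one CPU
        )

    start_instance(0)

    # Responding to events as driven by policy: a stand-in for scale-out,
    # bringing the number of running instances up to the desired count.
    desired_replicas = 3
    running = client.containers.list(filters={"label": "app=web"})
    for i in range(len(running), desired_replicas):
        start_instance(i)

    # Tidying up after shutdown: stop and remove every instance of the app.
    for c in client.containers.list(filters={"label": "app=web"}):
        c.stop()
        c.remove()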
In more complex environments the wider platform may also have to deal with the provisioning and entitlement of PaaS services such as databases, dynamic configuration of security rules, etc, on top of whatever is managing the actual physical hardware and host OS.

The flip side, however, is that interaction with application software is now potentially much simpler. Development teams deliver tested containers and, in the simplest terms, those containers either work, because they carry all of their software dependencies with them, or they don’t. This shifts the responsibility for application software library management (and conflict resolution) away from whoever runs the OS to whoever delivers the application container.
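One way to picture that shift (assuming Docker and the docker Python SDK; the image tags are illustrative) is that the container brings its own interpreter and libraries, so it behaves the same regardless of what happens to be installed on the host:

    # Sketch of the dependency shift: each container ships its own runtime and
    # libraries, pinned at build time, so it runs the same regardless of what
    # the host has installed. Assumes Docker and docker-py; tags illustrative.
    import sys
    import docker

    client = docker.from_env()

    # Whatever Python the host happens to have installed...
    print("host python:", sys.version.split()[0])

    # ...each containerised application brings its own.
    for tag in ("python:3.9-slim", "python:3.12-slim"):
        out = client.containers.run(tag, ["python", "--version"], remove=True)
        print(tag, "->", out.decode().strip())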

A separate, though attractive, use for containers is to provide local (i.e. on a developer’s desktop) access to standard development tools, etc.
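For example (image tag and mount path purely illustrative, and again assuming Docker and the docker Python SDK), a developer can run a Node.js toolchain against their local working directory without installing Node at all:

    # Sketch of containerised developer tooling: run a Node.js toolchain
    # against the current directory without installing Node locally.
    # Assumes Docker and docker-py; the image tag and paths are illustrative.
    import os
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "node:20-alpine",
        ["node", "-e", "console.log(require('fs').readdirSync('.').join(', '))"],
        remove=True,
        volumes={os.getcwd(): {"bind": "/work", "mode": "rw"}},
        working_dir="/work",
    )
    print("files visible to the containerised tool:", output.decode().strip())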

So now that we have discussed the technology required to run containerised applications, we can consider the characteristics of applications that run well in containers and, in our next blog, answer the original question: “Does the use of containers by enterprises drive a need for change in the approach to service management?”

At Virtual Clarity, we’ve successfully implemented thousands of containers for large enterprises; let us help.