Understanding Containers and How Your Business Can Benefit
Those of us in the IT world have been hearing a lot about ‘containers’ lately. Whilst containers as a concept have been around for many years, the current surge of adoption has largely been driven by Docker (a way of containerising and running containerised software) and, more recently, Kubernetes (a way of managing containers at scale). Global enterprises, as well as start-ups, have started adopting containers to speed up their Business As Usual processes. We’ve had a lot of client interest in integrating containers, so you may be wondering why they are resurfacing now, what exactly they are, and whether they can benefit your business.
Let’s take a closer look.
Straight Talking: What Are Containers?
At its most basic, a container is a way of packaging software into a single artefact that can be deployed to a (potentially) shared OS instance and run without needing to unpack it first. Think of it like a shipping container holding its contents together: it can easily be lifted, moved and stacked on top of other containers, and its contents remain unchanged.
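As a concrete sketch of what that packaging looks like in practice, here is a minimal Dockerfile for a trivial application. The base image tag and the `app.py` script are illustrative assumptions, not taken from any particular project:

```dockerfile
# Base image supplies the shared OS layer (illustrative tag)
FROM python:3.12-slim

WORKDIR /app

# Copy the dependency manifest and install run-time dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application itself into the image
COPY app.py .

# The container runs the app directly; no unpacking or install step
CMD ["python", "app.py"]
```

Building this with `docker build` produces a single artefact that runs the same way on any host with a compatible container engine, which is exactly the shipping-container property described above.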
Why is this good? Containers can reduce your need for space and drive utilisation up, saving time and money. In addition:
- Because they’re small and tightly bound, they allow applications to scale rapidly and safely, saving resources.
- They’re pre-packaged and tested, and don’t require configuration or installation, which saves time and increases productivity.
- As long as the host runs a compatible operating system, they can be started and deployed quickly and consistently. They can also be terminated just as quickly.
Clear Thinking: Containers Are an Easy Fix, Right?
Containers can be massively helpful when implemented properly and when there is an appropriate use case to ensure effectiveness. However, to run containers at scale, there are a lot of things to consider in the back end to set them up for success. It requires thought.
Any time you introduce more of something new, you add more risk, but that risk can be managed. For example, if you have one car on the road, the chance of an accident is low; if you have 10,000 cars on the road, your chances are higher. Avoiding an accident isn’t impossible, but there are now more cars around you that you need to be aware of to prevent one from happening.
A lot of clients don’t realise the complexity of the software that lies beneath containers to make it all work seamlessly together. Most large enterprises have many legacy systems, making fast, robust change difficult. Traditionally, software installation required a software distribution system and accompanying packaging scripts to perform the application installation (that is, if installs were not performed manually). Containers now permit standardised application deployment by bundling the application build with its run-time dependencies. But that benefit has to be exploited through greater organisation and automation: you should deploy a version management repository, for example.
The archetype with containers allows a company to identify its principal applications and divide them into components. The development team assembles its code base into a container, running through a test and QA cycle by deploying the container to each environment in turn before promoting the latest version into its management repository. From there, deployments of updates and rollbacks take place across the container engines on the virtual machines running on the servers. 10 applications with 10 components, 10 versions, 10 containers per container engine and 10 virtual machines per physical server gives a potential 100,000 configurations, and we would think of those numbers as modest. So even in a fairly simple world, you could end up with hundreds of thousands or millions of configurations in a dynamic environment. No human has the capacity to manage that configuration manually. Not only do ways of working change to manage demand; they have to change because of the potential scale, and businesses need to be ready for that. Note that simply having a container platform and using containers doesn’t in itself create any fundamental change in how you approach operations; that must be planned and implemented alongside the new technology. Formalising your RACI here is key.
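The scaling arithmetic above can be checked with a short calculation. The factors of 10 are the illustrative figures from the text, not measurements from any real estate:

```python
# Illustrative scaling arithmetic: five independent dimensions of 10
# multiply to 100,000 potential configurations.
applications = 10
components_per_app = 10
versions_per_component = 10
containers_per_engine = 10
vms_per_server = 10

configurations = (applications
                  * components_per_app
                  * versions_per_component
                  * containers_per_engine
                  * vms_per_server)

print(configurations)  # 100000
```

Because each dimension multiplies the others, doubling just one factor (say, 20 versions instead of 10) doubles the total, which is why the count races past anything a human could track manually.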
People think containers are just something else to run, but how do they change the way you manage services in an environment?
A fully functioning, multi-server container platform that is manageable at scale is fundamentally a complex thing with a heavy reliance on policy-driven automation. Of course, in a “container platform as a service” (CaaS) environment (e.g. on AWS, Azure or GCP), someone else may be responsible for building and operating that platform, but it still exists.
When you think of classic IT management, the usual question is: “which server does this specific application run on?” That question no longer applies in the world of containers. At scale, you don’t necessarily know which server your container is on, and in some regards it doesn’t really matter. A container may move from one server to another on a regular basis, allowing the underlying platforms to be maintained to the correct level of “health”. Container platforms are complex in their own right, though building and running them may be “someone else’s problem”. We’ve successfully implemented thousands of containers in large enterprises; let us help.