Chris Buckley

Chris is never satisfied with merely achieving success: he is constantly looking for the next edge and is always focused on business outcomes in a commercial context, having solved perplexing problems within the most complex global environments for nearly two decades.

Since joining us at Virtual Clarity he has developed distributed systems platform strategy and served as lead architect for a global enterprise private cloud.

What you might not expect is that he combined being a part-time racing driver with being the founding co-chair of the Open Data Center Alliance’s infrastructure workgroup (he still races, by the way). Chris is guided by a passion for identifying the right problems to solve from a business perspective; feel free to challenge him.


Hybrid Cloud: Standardise Everything (Part 1 of 2)

Quite often when we talk with clients about how they view Cloud and the type of shift they want to make in their approach to IT, the conversation turns to hybrid cloud.

For many of our clients, this is initially seen as a platform, consisting of both on-prem and public cloud capacity, that has a single overarching orchestration layer to deliver a consistent technical interface to those different types of capacity. In other words, the creation of a hybrid cloud is seen as essentially a technology engineering and deployment challenge that needs to focus on standardising interfaces across providers.

At Virtual Clarity, we think of it somewhat differently, but let’s start by digging into this technology-oriented view.

In enterprises that are not able to go “all-in” on making use of public cloud (i.e. most enterprises), a model that looks the same on-prem as in public cloud looks very attractive, both as a way to gain experience of using cloud environments in-house and as a way to ensure that when workloads move to external capacity that experience remains relevant (and any tooling bought or built remains viable).

So now let’s look at different ways this technology platform can be delivered:

1- Standardise on the same cloud platform technology internally as the cloud vendor uses. This is viable for a limited set of vendors (vCloud Air/VMware, OpenStack-based offerings and Azure/Azure Stack), BUT of these Azure is the only one that really extends into the PaaS arena and, even then, for on-premises hosting Azure Stack only offers a subset of the full Azure service catalogue (and at the time of writing is still only in preview). A variation on this is the announced, but not yet GA, availability of VMware services hosted in AWS.

2- Deploy an emulation layer on top of the private cloud to present an API/UI that is consistent with that offered by your Virtual Private Cloud provider. Eucalyptus (AWS, but only the EC2 and S3 APIs) is the obvious third-party example here, though it seems to be withering on the vine somewhat. Alternatively, roll your own…

3- Build a standardised façade over the native interfaces of both the on-prem and public cloud platforms. Most infrastructure orchestration software vendors push this solution. Unfortunately, whilst it offers a lot of potential flexibility, it also typically requires a lot of effort to encapsulate the APIs for each of the services you want to present to consumers. This abstraction isolates consumers from vendor specifics and reduces the risk of lock-in, but at a cost: experience suggests that going this route requires significant investment in ongoing engineering effort and/or a ruthless focus on limiting the in-scope services.

4- Deploy a “cloud in a cloud”. I have seen several organisations pick systems such as OpenShift and then deploy those across their internal virtualised capacity and public cloud capacity as a way to provide consistency and abstract from the hosting provider. However, they then have to manage a complex software stack just to provide infrastructure services.
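The façade pattern in option 3 can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual API: the class names, the `InstanceRequest` shape and the stubbed adapters are all hypothetical, and a real adapter would call the provider’s SDK rather than return a canned identifier. The point it demonstrates is that consumers code only against the façade, which is also why every additional service multiplies the engineering effort — each one needs its own normalised interface and one adapter per provider.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical normalised request: the facade's own vocabulary,
# independent of any single provider's API.
@dataclass
class InstanceRequest:
    name: str
    cpus: int
    memory_gb: int

class CloudProvider(ABC):
    """The standardised facade presented to consumers (option 3)."""
    @abstractmethod
    def create_instance(self, req: InstanceRequest) -> str:
        """Provision a server; return a provider-neutral instance id."""

class PublicCloudAdapter(CloudProvider):
    # A real implementation would call the vendor SDK here;
    # stubbed so the sketch stays self-contained.
    def create_instance(self, req: InstanceRequest) -> str:
        return f"pub-{req.name}"

class OnPremAdapter(CloudProvider):
    def create_instance(self, req: InstanceRequest) -> str:
        return f"prem-{req.name}"

def provision(provider: CloudProvider, req: InstanceRequest) -> str:
    # Consumers depend only on the facade, so the same call works
    # whichever adapter is wired in behind it.
    return provider.create_instance(req)
```

The abstraction buys portability, but note that anything not modelled in `InstanceRequest` (spot pricing, placement groups, provider-specific storage tiers) is simply unavailable to consumers — the lock-in/flexibility trade-off described above, in code form.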

So this type of technology standardisation is certainly possible – though with some fairly clear limitations. Do those limitations matter? Well, how you start depends on where you want to go.

1- Cost reduction – if your aim in moving to cloud is to reduce cost by leveraging pay-as-you-go pricing, then investing in on-premise capabilities to deliver hybrid likely won’t help with reducing cost. It might help smooth the way, but that investment will just make an overall saving harder to achieve.

2- Agility – if your aim is to help deliver enhanced agility for a core set of cloud services (typically through rapid provisioning of servers, storage and, perhaps, datastores) then hybrid absolutely can help, not least as, by presenting a consistent interface, developer teams are now able to build their infrastructure manifests and deployment scripts once rather than for each environment. Reuse is king! Unfortunately, the price of that agility may well be an extended design and implementation period. So if you want agility in 2 or 3 years, that’s fine. If you want it now – not so much…

3- Innovation – if, however, you want to drive innovation by riding on the back of the big cloud vendors’ ability to deliver new services and functionality at pace, then anything that limits access to the full set of services (and improvements to them) is not going to endear you to your consumer base. There does not seem to be a hybrid model that works well here.
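The reuse argument in point 2 above can be made concrete. In this hypothetical sketch (the manifest shape, the `deploy` helper and the endpoint names are all illustrative, not a real tool), a consistent interface means the deployment logic contains no per-environment branching, so a team writes its infrastructure manifest once and runs it anywhere:

```python
# Hypothetical manifest a team authors once; with a consistent
# hybrid interface the same description drives every environment.
manifest = {
    "servers": [
        {"name": "web", "count": 2},
        {"name": "db", "count": 1},
    ]
}

def deploy(manifest: dict, endpoint: str) -> list[str]:
    # Because on-prem and public cloud expose the same interface,
    # this logic never branches on which environment it targets.
    deployed = []
    for spec in manifest["servers"]:
        for i in range(spec["count"]):
            deployed.append(f"{endpoint}/{spec['name']}-{i}")
    return deployed

# Same manifest, two environments, no duplicated scripting.
on_prem_servers = deploy(manifest, "onprem")
public_servers = deploy(manifest, "aws")
```

Without the consistent interface, each environment would need its own variant of `deploy` (and its own manifest dialect) — which is exactly the duplication the hybrid platform is meant to eliminate.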

Ultimately if you are willing to limit the scope of services your consumers can make use of and/or take your time getting there then this type of technology interface standardisation approach can work. But in many cases your time, effort and money may be best spent elsewhere.

So how do we at Virtual Clarity see hybrid? Stay tuned for part 2 of this article!