Data Centre Dynamics Magazine speaks to our CEO, David Trossell, about multi-cloud as a solution.
February 20, 2018
Multi-cloud: the next step in cloud computing?
Multi-cloud is said to be the next stage in cloud computing, but is it? David Linthicum offers a definition in his Asia Cloud Forum article, ‘What is multi-cloud? The next step in cloud computing’: “’Multi-cloud’ means using more than a single public cloud. That usage pattern arose when enterprises tried to avoid dependence on a single public cloud provider, when they chose specific services from each public cloud to get the best of each, or when they wanted both benefits.”
He then asks how multi-cloud relates to hybrid cloud, while stating that cloud model names revolve around patterns of use (e.g. public cloud, private cloud, and hybrid cloud). In his view, many people are using multi-cloud and hybrid cloud interchangeably. Yet, he emphasises that both cloud models have some distinct characteristics.
Hybrid cloud, for example, is commonly used as a term to describe when a public cloud is used with a private cloud. “If you use multiple public clouds with a private cloud, that is still a multi-cloud. (Some people might call it a hybrid multi-cloud, which is fine),” he explains. However, a multi-cloud architecture typically uses two or more public clouds.
Multi-cloud aims
Essentially, the aim of multi-cloud is to avoid any reliance on a single public cloud provider.
That makes sense if the data is being mirrored between the different cloud service providers to maximise uptime whenever disaster strikes – so long as the data created by an organisation is frequently backed up.
To perhaps confuse matters more, he reveals that there is “also a beast called a pragmatic hybrid cloud, which is the pairing of a traditional enterprise data center with a public cloud; these exist because many enterprises have been disappointed with private clouds and so sought a way to combine what they already had with the public cloud.”
Increased complexity
So, you can see how cloud computing is becoming more complex. Initially, the idea was to place workloads into a single cloud – public or private. Yet the attractiveness of the hybrid cloud offered enterprises more choice. He says this led Google and Microsoft to develop “compelling public cloud platforms, providing alternatives to Amazon Web Services, which had started the public cloud business.” They were followed by other enterprise cloud providers, such as IBM, HP Enterprise, and Oracle. However, he thinks these latter players were less successful than Google and Microsoft.
Nevertheless, he claims, they are all viable cloud options, and so enterprises chose to mix them together. This has been achieved both through formal architectural processes and through shadow IT. “Various shadow IT efforts often picked different public clouds and then wanted those cloud operations to be managed by enterprise IT”, but they were often adopted without the authorisation or knowledge of the organisation’s own IT department (hence the term ‘shadow IT’).
Enterprises have arrived at the multi-cloud model through several routes, though, and so Linthicum claims that most enterprises now manage a multi-cloud infrastructure. He adds: “Although many IT organisations simply manage these complex multi-cloud environments using the native tools and services from each cloud, a few are getting smart and abstracting themselves away from the complexity.”
Reducing complexity
To reduce this complexity, they are using cloud management platforms (CMPs) or cloud service brokers (CSBs), which let them manage their multiple clouds as a single cloud. Yet this means, he explains, that they can only use a “subset of features from each cloud; that is, take the ‘least common denominator’ approach.”
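To make that ‘least common denominator’ idea concrete, here is a minimal, purely illustrative sketch in Python: a thin abstraction layer that exposes only the operations every provider supports, with an in-memory stand-in where a real adapter would wrap each provider’s SDK. The class and function names are hypothetical, not any particular CMP’s API.

```python
# A hypothetical "least common denominator" abstraction over several clouds.
# Only operations every provider supports are exposed; provider-specific
# features are deliberately hidden behind the interface.
from abc import ABC, abstractmethod
from typing import Dict, List


class ObjectStore(ABC):
    """Common subset of operations offered by any public cloud object store."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None:
        ...

    @abstractmethod
    def get(self, key: str) -> bytes:
        ...


class InMemoryStore(ObjectStore):
    """Stand-in backend; a real adapter would wrap a provider SDK instead."""

    def __init__(self) -> None:
        self._objects: Dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def replicate(key: str, data: bytes, clouds: List[ObjectStore]) -> None:
    """Write the same object to every cloud through the common interface."""
    for cloud in clouds:
        cloud.put(key, data)


if __name__ == "__main__":
    clouds: List[ObjectStore] = [InMemoryStore(), InMemoryStore()]  # one per provider
    replicate("backups/2018-02-20.tar", b"example payload", clouds)
    print([c.get("backups/2018-02-20.tar") for c in clouds])
```

The trade-off is exactly the one Linthicum describes: the interface is portable across clouds, but anything a single provider does better than the rest is left out of it.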
He therefore advises you to focus on what cloud technologies do, and not on what they are called. “It’s a fact that cloud architectures will evolve in the next few years, and new patterns will emerge as well. New names will come, too, I’m sure,” he comments.
Nothing new
Furthermore, it could be argued that this concept is nothing new. In fact, there is very little new in IT: as the underlying technology improves, it becomes possible to bring back technologies and techniques from the past and re-purpose them with new branding. Let’s remember that the cloud was once referred to as application service provision, or on-demand computing. Indeed, when cloud computing first achieved common usage as a term, even some web-based email providers started to claim that they were cloud service providers.
Multi-cloud, therefore, feels a bit like the electricity supply industry to me, where people shop around for the best offer. However, a lot of companies still rely on Microsoft. Microsoft itself aims to push everyone towards the subscription model in the cloud, with Microsoft Azure as the destination. But should all the low-cost applications sit in one cloud, or should they be spread across the different clouds offered by the likes of Google, Amazon AWS and Azure? Moreover, there is also the IBM Cloud for all those organisations that are still running iSeries (AS/400) applications.
Cost-benefit analysis
So, in my view, there needs to be an assessment of the costs of cloud downtime. I’d also ask myself the question: does it cost more if I duplicate applications and data across cloud locations of the same provider, or across different cloud providers? Using a multi-cloud should make financial and operational sense. There must be a business case for it; otherwise it won’t deliver any efficiency savings.
It’s also important to consider the impact that network and data latency will have on each type of cloud. Where there is a need for private, low-latency access, for time-critical applications such as databases that need to sit behind security walls, a private cloud is the better fit. Yet for other applications in the same organisation, which are less time-critical and open to remote or public access, public cloud is the way forward.
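One rough way to put numbers on that trade-off is to time round trips to each candidate location before deciding where an application should live. The sketch below is illustrative only; the endpoint URLs are placeholders, not real addresses.

```python
# Rough round-trip timing against candidate locations (placeholder URLs) to
# help decide whether an application needs private, low-latency hosting or
# can tolerate a public cloud region further away.
import time
from urllib.request import urlopen

CANDIDATE_ENDPOINTS = {
    "private-dc": "https://internal.example.net/health",    # hypothetical
    "public-cloud": "https://eu.example-cloud.com/health",  # hypothetical
}


def mean_rtt_ms(url: str, samples: int = 5) -> float:
    """Return the mean request round-trip time in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urlopen(url, timeout=5) as response:
            response.read()
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)


if __name__ == "__main__":
    for name, url in CANDIDATE_ENDPOINTS.items():
        print(f"{name}: {mean_rtt_ms(url):.1f} ms")
```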
So which cloud model should you adopt and embrace? Take cars as an example. There are different models for different functions, and most families that live outside major conurbations run a number of variants: a small commuter car, a people carrier or SUV for the family, and so on. Even for those who live in places such as London, with a myriad of public transport options, owning a car is often unjustifiable, yet you will still see people who own their own vehicles, while others hire a vehicle from time to time, take a taxi or use public transport. In the same way, there isn’t a single model that fits all aspirations and requirements, and the same principle applies whenever an organisation selects a particular cloud model.
Ideally, this means that there should be a cost-benefit analysis based on needs, to ensure that the right approach is adopted across the enterprise. This should include some analysis of business and service continuity, as well as of disaster recovery. For example, if an organisation relies too heavily on a single provider, it will be at greater risk whenever outages occur.
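As a back-of-the-envelope illustration of that analysis (every figure below is a placeholder to be replaced with an organisation’s own numbers, and it assumes the two providers fail independently), the expected cost of downtime with one provider can be compared with the residual risk when a second, independent provider is added:

```python
# Back-of-the-envelope downtime comparison; all figures are illustrative
# placeholders, not benchmarks.
HOURS_PER_YEAR = 8760
COST_PER_HOUR = 50_000            # hypothetical cost of an outage per hour
SINGLE_PROVIDER_DOWNTIME_H = 8    # hypothetical expected downtime per year

# With two providers that fail independently, an outage requires both to be
# down at the same time, so the expected overlap shrinks dramatically.
p_down = SINGLE_PROVIDER_DOWNTIME_H / HOURS_PER_YEAR
dual_downtime_h = (p_down ** 2) * HOURS_PER_YEAR

print(f"Single provider: ~{SINGLE_PROVIDER_DOWNTIME_H * COST_PER_HOUR:,.0f} at risk per year")
print(f"Two providers:   ~{dual_downtime_h * COST_PER_HOUR:,.2f} at risk per year")
print("Weigh the difference against the running cost of the second cloud.")
```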
It would therefore be wise to employ more than one cloud service provider to minimise risk. Furthermore, there should be more than one data centre and disaster recovery site, located far enough apart to ensure that they don’t share the same circles of disruption. The key is to spread the risk in order to maintain high levels of uptime.
Service levels
I also find that many cloud-only companies worry about the service levels that cloud service providers offer. To mitigate the limitations of these Service Level Agreements (SLAs), many organisations, just as with their traditional data centres, use multiple clouds, whether from the same provider or from different providers, to blunt the effects of outages and spread the risk.
There are now several multi-cloud management tools on the market that facilitate the management of data and computing across multiple heterogeneous clouds. However, these overlook one factor: how data can be moved between these clouds in a speedy and efficient way. To achieve this there is a need for WAN data acceleration, and this can be offered by such solutions as PORTrockIT.
Deployment tips
My other tips for deploying the multi-cloud model include:
- Use a single pane of glass management layer to manage across the clouds
- Don’t forget about the performance requirements of moving data between the clouds
- Ask yourself: ‘What is the impact of latency on your application, both for in-house users and for cloud users?’
- Remember that data must be encrypted as it flows between the clouds, and this can’t easily be achieved with traditional WAN optimisation tools (see the sketch after this list).
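On that last point, here is a minimal sketch of in-flight encryption, assuming the third-party Python ‘cryptography’ package and leaving key management (vaults, rotation) aside:

```python
# Minimal in-flight encryption sketch using the third-party 'cryptography'
# package; key management is deliberately out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in practice, fetch this from a key vault
cipher = Fernet(key)

payload = b"records extracted for replication to another cloud"
ciphertext = cipher.encrypt(payload)  # this is what crosses the WAN

# ... transfer ciphertext between clouds, then decrypt at the destination ...
assert cipher.decrypt(ciphertext) == payload
```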
Next steps
As cloud adoption spreads to more and more organisations, along with the growth in unstructured data, there will be a move towards object storage. This will occur not just in the cloud, but as the standard for on-premises storage too, because object storage removes the namespace limitations of traditional file systems. So, multi-cloud computing may spread risk, but it may not necessarily be the right approach for every organisation. Data centres, going forward, will therefore need to offer a range of cloud services to meet the needs of each of their clients.
David Trossell is CEO and CTO of networking software specialist Bridgeworks Ltd