Bridgeworks CEO David Trossell speaks to Government Public Sector Journal (GPSJ) on how there are no quick wins when it comes to cloud storage. It’s always best to proceed with caution: make sure you understand the requirements of your public sector organisation by auditing what you already have, to ascertain what you may need. There is also a need to fully understand services such as AWS, Microsoft Azure, and Google Cloud.
Summer/Autumn 2019
However, Kurt Marko, writing for TechTarget, says organisations “often displace existing storage systems in need of a technology upgrade with an easily-understood service that provides a distributed, high-availability infrastructure; usage-based pricing; and built-in security inherent to all cloud services.” Marko continues by arguing that the next stage for the enterprise hybrid cloud is to link cloud services with existing applications, providing an extension to any existing on-premises infrastructure.
He explains: “This integrated hybrid cloud storage approach requires a seamless interface between the private systems and public services, with data continuously synchronized between the two. The goal is to make the cloud an extension of enterprise capacity and a staging area for applications to use more advanced cloud databases, data warehouses, analytics, and machine learning services.”
Innovation
The trouble is that some organisations are quick to innovate, but slow to implement. Some sectors, such as banking and financial services, have traditionally been slow to innovate and subsequently slow to implement new technologies such as the public or hybrid cloud. Security concerns have largely dogged the implementation of public and hybrid cloud in these sectors. Yet nowadays the cloud-first mantra that first appeared in 2013 is swinging the pendulum towards a “let’s move everything” policy.
It’s a fact of life that data is never in the right place. There is often not enough space for it either, and there are also cost, location, regulatory, and data transfer speed concerns to address. On top of all these issues, organisations often have too little data duplication for resilience. Over the years, there have been many attempts to resolve this through tiering, hierarchical storage management and information lifecycle management. At the end of the day, data storage is always a compromise.
Over the years, CPU performance, memory systems and in-memory storage have all improved. However, the problem of what to do with data, and how to manage and store it, is a conundrum that organisations keep pushing down the line – like widening a freeway, it simply moves the bottleneck elsewhere. At some point there will be a need for more and more storage. Protecting data in flight and at rest also costs money, and someone has to pay for it at a time when practically everything is cloud-based.
There is therefore a need to consider cloud security, and how to expediently back up data and ensure its availability, as some factors, such as outages, are no longer under your control. So there is the potential risk of upsetting lots of people just a bit, or upsetting a small number of people a lot. One way to manage this risk is to create your own hyper-converged on-premises cloud, and help to achieve this is available from OEMs. Beyond this, everyone is still talking about the cloud as new technology, when they should perhaps be talking more about the functionality of hybrid cloud storage.
This way or no way
New technology is often accompanied by a deep passion that it is “this way” or “no way”, and that if you don’t adopt it, your competitors will leave you behind. The cloud is a prime example of this. After all the talking and hype, it took about three to four years for companies to start implementing solutions and to then find out which of the promises it did and did not fulfil.
Many organisations adopt a cloud-only strategy in which every aspect of their IT resides in the cloud. There are enterprises that have only put the customer-facing aspect in the cloud, and those that have kept everything in-house but have created their own private cloud. Of course, there are many flavours. Yet, to moderate the all-or-nothing camp, and to overcome some of the limitations of using the cloud, the concept of hybrid cloud storage has evolved. The whole idea of the hybrid cloud is that it acts as a seamless extension to your existing on-premises storage, with no changes to the applications.
Latency and packet loss
One of the issues that many implementations face is the failure to take into account the effects of latency and packet loss on the performance of the application(s), be this from the user to the cloud or from the data centre to the cloud. This is especially true when organisations look to use the cloud as a storage facility in such applications as Disaster-Recovery-as-a-Service (DRaaS) and Archive-as-a-Service (AaaS), or as a storage tier in a hierarchical storage management (HSM) scenario.
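To see why these two factors dominate, consider the classic Mathis et al. approximation of single-stream TCP throughput. The Python sketch below is purely illustrative – the segment size, round-trip times and loss rate are assumed figures, not measurements from any particular cloud link:

```python
import math

# Mathis et al. (1997) approximation of best-case single-stream TCP throughput:
#   throughput <= (MSS / RTT) * (1 / sqrt(p))
# MSS = segment size, RTT = round-trip time, p = packet-loss rate.

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate upper bound for one TCP stream, in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_second = (mss_bytes / rtt_s) / math.sqrt(loss_rate)
    return bytes_per_second * 8 / 1_000_000

# Illustrative (assumed) figures: 1460-byte segments, 0.1% packet loss.
for rtt_ms in (20, 80):
    print(f"RTT {rtt_ms:>2} ms: ~{tcp_throughput_mbps(1460, rtt_ms, 0.001):.1f} Mbit/s")
```

On these assumed figures, one stream tops out at roughly 18 Mbit/s at 20 ms RTT but only around 5 Mbit/s at 80 ms, no matter how much bandwidth is provisioned – which is why simply buying a fatter pipe does not fix the problem.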
In the past, many organisations made use of the cloud gateway concept. This was a caching device with local storage that acted as a holding point whilst the data was deduplicated, after which it was trickle-fed into the cloud. The issue with these devices was that, whilst they ingested data quickly, the transfer to and from the cloud was painfully slow – not what you want in a disaster recovery situation.
Another major flaw in these products was working with compressed and encrypted files. These are notoriously difficult to dedupe or compress any further, so they have to be transferred as is. The last and most critical issue with these devices was that they were very susceptible to latency and packet loss on the connection to the cloud, further hampering performance when you really need it – during recovery of the data from the cloud.
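The point about compressed and encrypted files is easy to demonstrate. In the Python sketch below, zlib stands in for whatever dedupe or compression engine a gateway might run, and random bytes stand in for ciphertext, which is statistically indistinguishable from random data:

```python
import os
import zlib

repetitive = b"backup backup backup " * 10_000  # highly redundant payload
random_like = os.urandom(len(repetitive))       # stands in for encrypted data

for label, data in (("repetitive text", repetitive),
                    ("random ('encrypted')", random_like)):
    compressed = zlib.compress(data, 9)
    ratio = len(compressed) / len(data)
    print(f"{label}: {len(data)} -> {len(compressed)} bytes ({ratio:.1%})")
```

The repetitive input shrinks to a tiny fraction of its original size, while the random input stays at (or slightly above) 100% – there is no redundancy left to exploit, so the data must cross the WAN at full size.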
Data transfer
Many applications, including backup and archive products, as well as a number of storage devices, can now transfer data directly to the cloud in object form, bypassing the cloud gateways completely. I would like to say that removing the cloud gateways resolved all the problems, but it is just like building a new motorway to bypass a bottleneck – you move the problem further down the line to another location. In similar fashion, when you implement a hybrid cloud solution, the problem literally moves “down the line” to our old performance thieves: packet loss and latency.
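As a concrete illustration of what “in object form” means, here is a minimal sketch using the AWS SDK for Python (boto3); the file name, bucket and key are hypothetical placeholders:

```python
# Minimal sketch of a direct object upload, bypassing any on-premises gateway.
# The file name, bucket and key are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")  # credentials resolved via the usual environment/config chain
s3.upload_file(
    Filename="nightly-backup.tar.gz",
    Bucket="example-dr-archive",
    Key="backups/2019/nightly-backup.tar.gz",
)
```

The upload itself is a one-liner, but the bytes still traverse the WAN, so the latency and loss ceiling described earlier applies just as much to this path.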
The “go-to” industry standard on these occasions is WAN optimisation, either as a stand-alone product or as part of an SD-WAN installation. However, as I have already alluded to, these tools have serious limitations with rich file formats that use compression techniques, as well as with the encrypted data that we all want when we transfer data over the WAN. In fact, many companies are now insisting that all data on the LAN and WAN is encrypted.
No devolution
However, using the cloud as part of a hybrid cloud storage strategy – or any strategy that uses the cloud – does not devolve the responsibility for ensuring your data’s integrity or safety onto the provider. The cloud providers supply the functionality and the capacity; the rest is still down to you. You still need to have multiple copies of your data in multiple places, either with the same cloud provider or a completely different one. It is always worth remembering that putting data into the cloud is relatively cheap compared with the cost of getting it out!
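In practice, keeping a copy with a second provider can be as simple as writing the same object to two independent S3-compatible endpoints. The sketch below assumes boto3 again; the profiles, endpoint URL and bucket names are all hypothetical:

```python
# Sketch: write the same backup object to two independent S3-compatible
# providers. The profiles, endpoint URL and bucket names are hypothetical.
import boto3

targets = [
    {"profile": "primary-cloud", "endpoint": None, "bucket": "dr-copy-a"},
    {"profile": "second-cloud", "endpoint": "https://s3.alt-provider.example",
     "bucket": "dr-copy-b"},
]

for target in targets:
    session = boto3.Session(profile_name=target["profile"])
    s3 = session.client("s3", endpoint_url=target["endpoint"])
    s3.upload_file("nightly-backup.tar.gz", target["bucket"],
                   "backups/nightly-backup.tar.gz")
```

Whether the second copy lives in another region of the same provider or with a different provider entirely, the point stands: the responsibility for redundancy stays with you.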
Bandwidth costs
The last thing to address is the question of performance. Bandwidth costs are declining rapidly (faster would be nicer) and high-bandwidth connections are extending their geographical reach. But unless we solve the problems of latency and packet loss, we are never going to get a real return on the WAN investment, no matter how much bandwidth we throw at the problem.
Go back to the idea of the hybrid cloud strategy: it was to provide a seamless extension of our infrastructure, and performance has to be a key aspect of that seamlessness. But how can this be done when the current go-to tools only meet part of the requirements? Well, there is a new way to transfer data across the WAN. It’s called WAN Data Acceleration.
It does not change or manipulate the data by compressing or deduping it, and it can handle encrypted and compressed data just as well as any other format. It accelerates data across the WAN whilst mitigating latency and packet loss, using a mixture of massive parallelisation of the data stream coupled with AI to constantly tune network parameters – in some cases achieving 95% bandwidth utilisation.
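The parallelisation idea can be illustrated in a very simplified form. The Python sketch below is a toy version of the general technique, not the PORTrockIT implementation: it splits a payload into chunks and pushes them over several concurrent workers, so that no single TCP stream’s latency/loss ceiling limits the whole transfer (upload_chunk is a hypothetical stand-in for one stream):

```python
# Toy illustration of parallelising one logical transfer across many streams.
# upload_chunk() is a hypothetical stand-in for "send these bytes over one
# TCP connection"; real products also manage ordering, tuning and congestion.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk; an arbitrary illustrative choice

def upload_chunk(index: int, chunk: bytes) -> int:
    # Placeholder: imagine one TCP stream carrying this chunk to the far end.
    return len(chunk)

def parallel_send(payload: bytes, streams: int = 16) -> int:
    chunks = [payload[i:i + CHUNK_SIZE]
              for i in range(0, len(payload), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        sent = pool.map(upload_chunk, range(len(chunks)), chunks)
    return sum(sent)

print(parallel_send(b"\x00" * (64 * 1024 * 1024)), "bytes sent")
```

Plugging in the Mathis approximation from earlier, sixteen streams at 80 ms and 0.1% loss could in principle aggregate to around 74 Mbit/s where a single stream managed roughly 5 Mbit/s – illustrating why parallelism, combined with constant tuning, recovers utilisation that a single stream cannot.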
Cloud win-win
So, to have a win-win, cloud storage can’t sit on its own; it needs a network infrastructure behind it to operate efficiently and effectively – whether for BaaS, DRaaS, PaaS or simply for applications and data storage. Hybrid cloud storage therefore has to be integrated, allowing for more secure data transfer and storage, and better accessibility through the mitigation of latency and the reduction of packet loss with a solution such as PORTrockIT.
With a higher level of WAN performance behind it, hybrid cloud storage becomes more robust and more effective – particularly when disaster strikes and there is a need to retrieve data to ensure that an organisation like your own can continue to operate. The focus should be on service continuity, which requires hybrid cloud storage to be located in at least three sites, each beyond the others’ circles of disruption, to ensure that business goes on no matter what happens. Data is the new gold, and so it’s worth investing in technologies that make sure it’s secure and readily available.