ISE Magazine speaks to Bridgeworks Chairman Jamie Eykyn about the growing costs of network and data management.
March 1, 2018
The performance of telecom/IT systems and the increasing reliance on data mean that inadequacies within a network can lead to significant business costs. This article examines the Citrix-sponsored research by Tech Research Asia and discusses how these expenses can be minimized. The research found that poor network connectivity costs Australian companies an average of 71 hours of lost productivity per employee, per year, and estimated that for a company with 50 employees this equates to a total cost of $144,563 per year.
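As a rough sanity check on those figures, the arithmetic below shows what the study's numbers imply per lost hour (a sketch, assuming the cost is driven entirely by the hours of lost productivity, which the study does not state explicitly):

```python
# Rough sanity check on the Tech Research Asia figures.
# Assumption (not from the study): the total cost maps purely to lost hours.
hours_lost_per_employee = 71   # per year, from the study
employees = 50
total_cost = 144_563           # per year, as quoted in the article

total_hours_lost = hours_lost_per_employee * employees  # 3,550 hours
implied_hourly_cost = total_cost / total_hours_lost     # ~40.72 per hour

print(f"Total hours lost per year: {total_hours_lost:,}")
print(f"Implied cost per lost hour: ${implied_hourly_cost:.2f}")
```

In other words, the headline figure implies a loaded cost of roughly $40 for every hour of productivity the network takes away.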
Firms in New Zealand fared only slightly better, with an average productivity loss of NZ$66,399 per year for the same-sized enterprise. The study found that 23% of outages affected Australian companies' revenue, while in New Zealand, 47% of the companies surveyed said that network issues impacted 14% of their revenue streams.
There are other parts of the world where the situation is far worse. In certain areas, network capacity is limited, latency is a big problem, and packet loss is severe. Bandwidth there is often still extremely expensive, and therefore a precious commodity, so companies need to make optimum use of the bandwidth they do have.
In summary, you need to look at different ways of transferring data across the network, because some of the traditional methods simply don't alleviate these problems, especially when it comes to seriously large volumes of data.
Market Changes
Telcos have historically been about selling pipes and capacity, such as MPLS. That has always been their focus, so they haven't necessarily had to concentrate on achieving optimum performance. Ten years ago, organizations were running small pipes, and the challenge was how to get as much data down a limited-bandwidth pipe as possible. The only way to achieve this was to compress and dedupe the data to give the illusion of performance. Yes, companies are now investing in reliable network connectivity, but these solutions didn't take into account the challenge that latency presents, especially with large and encrypted volumes of data.
A decade down the line, we have 10 Gbps pipes, and we are starting to talk about 100 Gbps bandwidth. Instead of running at 20% capacity, telco customers now want to run at 98% capacity to ensure that they are getting the value for which they are paying. Yet the inadequacy is not the pipe; it's about HOW you send data over it. The key issue to address here is latency.
If you are spending £100,000 ($123,671) a month on your bandwidth, you will want to ensure it is being fully utilized. The trouble is that many organizations are only using around 20% of the available bandwidth that telcos are providing. It is in the telcos' interest to help their customers address this by advising them on how to optimize their bandwidth usage.
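To put that in perspective, here is a back-of-the-envelope sketch using the article's own figures (assuming, purely for illustration, that utilization sits constantly at 20%):

```python
# Back-of-the-envelope cost of under-utilized bandwidth, using the
# article's example figures. Assumption: utilization is a flat 20%.
monthly_spend_gbp = 100_000
utilization = 0.20

idle_spend_per_month = monthly_spend_gbp * (1 - utilization)  # £80,000
idle_spend_per_year = idle_spend_per_month * 12               # £960,000
effective_cost_multiplier = 1 / utilization                   # 5x

print(f"Spend on unused capacity: £{idle_spend_per_month:,.0f}/month "
      f"(£{idle_spend_per_year:,.0f}/year)")
print(f"Effective price per utilized unit of bandwidth: "
      f"{effective_cost_multiplier:.0f}x the headline rate")
```

On those numbers, £80,000 a month is being paid for capacity that carries nothing, and every bit that does get through effectively costs five times the headline rate.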
Underperformance Risks
According to an article in Computerworld that references Tech Research Asia's research, the risks of underperforming networks include the following consequences:
• Staff performance and collaboration — time to undertake activities increases and the ability to collaborate/innovate with colleagues is adversely impacted.
• Data gathering and access — namely the inability to capture data, find data, and access data, for insights and analysis. (In some cases, it can also cause loss of data — a potential minefield all of its own.)
• Customer engagement and interaction — lost sales revenue and the inability to contact or respond to customers in a timely manner.
From a telecommunications company's perspective, the latter point is particularly crucial. Customers that constantly suffer poor network performance are likely to jump ship. Alternatively, they will struggle on until the network breaks, or spend more than they should trying to optimize their existing network with, perhaps, WAN optimization tools. In fact, The Impact of Poor Network Performance on Business Goals and Costs study reveals that 50% of the surveyed organizations said they would need either to upgrade or to optimize their network environment to deliver their short- to mid-term business imperatives.
The trouble is that customers who upgrade or WAN-optimize their network environment may not find the answer and the outcome they seek. Network inadequacies arise from distance and from the fact that, fundamentally, the world runs on TCP/IP (with a few exceptions).
Inadequacies arise from latency and packet loss, yet people still believe that capacity solves latency, which is simply not the case. There is a common misconception that you can solve your problems with a bigger pipe. If you have 60 ms of latency on a 1 Gbps pipe, you will have 60 ms on a 10 Gbps pipe. It's the laws of physics; you can't change them! With a 10 Gbps pipe, you have 10 times the problem, because the data is still traveling at the same speed as it is on a 1 Gbps pipe. The difference: you have more capacity that you're not using (dark space) or, if you want to put it crudely, a drain into which you are pouring your cash.
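The effect is easy to see with the standard single-stream TCP throughput bound: a stream can never move more than one window of data per round trip, so its throughput is capped at window size divided by RTT, regardless of link capacity. A minimal sketch (the 64 KB figure is the classic un-scaled TCP window, used here purely for illustration):

```python
# Why a bigger pipe doesn't fix latency: a single TCP stream moves at most
# one window of data per round trip, so throughput <= window / RTT,
# no matter how fat the link is. (64 KB = classic un-scaled TCP window.)

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

window = 64 * 1024   # 64 KB window
rtt = 60.0           # 60 ms round trip, as in the article's example

cap = max_tcp_throughput_mbps(window, rtt)
print(f"Single-stream ceiling at {rtt:.0f} ms RTT: {cap:.1f} Mbit/s")

for link_gbps in (1, 10):
    used = cap / (link_gbps * 1000) * 100
    print(f"  On a {link_gbps} Gbps pipe, that is {used:.2f}% utilization")
```

At 60 ms, a single untuned stream tops out below 9 Mbit/s whichever pipe it runs on; everything above that ceiling is exactly the dark space described above.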
A Better Way?
What will solve your problem is looking at a different way to accelerate your data across the network, and that doesn't mean WAN optimization. WAN optimization is designed for small amounts of data over small pipes, using caching techniques. Apart from the technical difficulties, it is cost-prohibitive for the heavy lifting of the large amounts of data that telcos and their customers are transporting today.
The other alternative we hear about is lift and shift: AWS's Snowmobile, or sticking your data on a UPS truck. That may be viable for a one-off migration, although there are inherent risks in doing so, and it doesn't solve the problem of backing up, replicating, and, more importantly, restoring your data in circumstances where you need quick access.
You need to be able to restore the data very quickly when a disaster occurs; you can't wait 2 days for it. So, it's time for a fresh approach, and the good news is that solutions are already here. Telcos need to be a source of advice for infrastructure managers, and CIOs need to stop trying to force a square peg into a round hole. WAN optimization is great for WAN edge use cases, but for serious data movement, data acceleration is currently the only effective choice.
With increasing volatility in the world, the optimum method is to encrypt data when you are moving it so that nobody can gain access to it. However, encrypting data normally has implications for transfer speed. It is therefore important that telcos offer their customers a solution that allows for data encryption, compression, or deduplication without impacting network performance or imposing penalties on speed. This requires a data acceleration solution that makes it possible to replicate or back up huge volumes of data securely, whether to a data center, to the cloud, or to a hybrid environment.
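The article doesn't detail how such a solution works internally, but one common data-acceleration technique is to run many streams in parallel, so the pipe stays full while any single stream is waiting out its round trip. A minimal sketch of that idea (the send_chunk function is a hypothetical stand-in; real encryption and transport are omitted):

```python
# Sketch of the parallel-stream idea behind many data-acceleration
# products: split a large transfer into chunks and keep several streams
# in flight at once, so the link stays busy during each round trip.
# Hypothetical: send_chunk() stands in for a real encrypted transport.
from concurrent.futures import ThreadPoolExecutor
import time

RTT_SECONDS = 0.060          # 60 ms round trip, as in the article

def send_chunk(chunk_id: int) -> int:
    """Stand-in for sending one encrypted chunk and awaiting its ack."""
    time.sleep(RTT_SECONDS)  # each chunk costs at least one round trip
    return chunk_id

chunks = range(100)

# Serial: 100 round trips back to back -> ~6 seconds of pure latency.
start = time.time()
for c in chunks:
    send_chunk(c)
print(f"serial:   {time.time() - start:.2f}s")

# Parallel: 16 streams overlap their round trips -> ~0.4 seconds.
start = time.time()
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(send_chunk, chunks))
print(f"parallel: {time.time() - start:.2f}s")
```

The stream count of 16 is arbitrary here; the point is that overlapping round trips recovers the utilization that latency would otherwise throw away, without touching the pipe itself.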
When telcos offer their customers this type of solution, they will ensure their customers gain real value from their networks. And that could help reduce churn.