CCI's Graham Jarvis looks at one of the threats datacentres can't defend against, flooding, and demonstrates a way to carry on delivering a service even when your servers are 10 feet under water.
The severe floods that hit the north of England and parts of Scotland earlier this month devastated both homes and businesses, raising questions about whether the UK is sufficiently prepared to cope with such calamities. On 28th December 2015 the Guardian newspaper went so far as to say that the failure to ensure that flood defences could withstand the unprecedented high water levels would cost at least £5bn. Without doubt, this catastrophe could leave many people and businesses facing financial ruin. The feeling is that the flooding could have been prevented if more investment had been put into maintaining and building flood defences in the affected areas, and many commentators – including those affected by the floods – blamed Government cuts.
Vodafone’s experience
Vodafone was one of the high-profile companies affected. The IT press said that the floods had hit the company's datacentre. A spokesperson at Vodafone, for example, told Computer Business Review on 4th January 2016: “One of our key sites in the Kirkstall Road area of Leeds was affected by severe flooding over the Christmas weekend, which meant that Vodafone customers in the North East experienced intermittent issues with voice and data services, and we had an issue with power at one particular building in Leeds.”
The flooding restricted access to the building, which was needed in order to install generators after the back-up batteries had run down. Once access became possible, engineers were able to deploy the generators and other disaster recovery equipment. However, a recent email from Jane Frapwell, Corporate Communications Manager at Vodafone, claimed: “The effects on Vodafone of flooding were misreported recently because we had an isolated problem in Leeds, but this was a mobile exchange not a datacentre and there were no problems with any of our datacentres.”
Hurricane Sandy
While Vodafone claims that its datacentres weren’t hit by the flooding, and that the media had misreported the incident, datacentres around the world can be severely hit by flooding and other natural disasters. Floods are both disruptive and costly. Hurricane Sandy is a case in point.
In October 2012 Data Center Knowledge reported that at least two datacentres located in New York were damaged by flooding. Rich Miller's article for the IT magazine, ‘Massive Flooding Damages Several NYC Data Centers’, said: “Flooding from Hurricane Sandy has hobbled two datacentre buildings in Lower Manhattan, taking out diesel fuel pumps used to refuel generators, and a third building at 121 Varick is also reported to be without power…” Outages were also reported by many datacentre tenants at a major data hub at 111 8th Avenue.
The possibility that a datacentre’s service could be disrupted by flooding and other natural disasters therefore raises the following questions: is having one disaster recovery site enough, or should there ideally be two or three? Many datacentres are located far too close to each other, and so fall within the same circle of disruption. This is a reflection of the limitations of their current technology and the fact that distance creates latency issues that have a major impact on data throughput.
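To make that distance and latency trade-off concrete, here is a minimal, illustrative sketch – the site coordinates, the 100 km disruption radius and the helper names are assumptions for illustration only, not a description of any vendor's technology. It estimates the great-circle separation between two sites, the best-case round-trip time over fibre, and whether the sites share a circle of disruption.

```python
# Illustrative sketch only: are two hypothetical datacentre sites far enough
# apart to avoid a shared circle of disruption, and what latency floor does
# that distance impose?
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
# Light in optical fibre travels at roughly two-thirds of c (~200,000 km/s);
# real networks add routing and equipment delay on top of this floor.
FIBRE_KM_PER_MS = 200.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def assess_sites(site_a, site_b, disruption_radius_km=100.0):
    """Return separation, best-case round-trip time and shared-circle flag."""
    distance_km = great_circle_km(*site_a, *site_b)
    min_rtt_ms = 2 * distance_km / FIBRE_KM_PER_MS  # out and back, fibre floor only
    return {
        "distance_km": round(distance_km, 1),
        "min_rtt_ms": round(min_rtt_ms, 2),
        "same_circle_of_disruption": distance_km < disruption_radius_km,
    }

if __name__ == "__main__":
    leeds = (53.80, -1.55)    # example coordinates, for illustration only
    london = (51.51, -0.13)
    print(assess_sites(leeds, london))
```

Even the fibre-only floor of roughly 1 ms of round-trip time per 100 km shows why replicating data synchronously over long distances eats into throughput, which is the constraint that data acceleration technology aims to work around.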
Equally worrying is the fact that a survey by Zenium Technology has found that half of the world's datacentres have been disrupted by natural disasters, and 45% of UK companies have – according to Computer Business Review's article of 17th June 2015 – experienced downtime due to natural causes.
“So I don’t care whether it’s Chennai, Texas or Leeds. Most companies make do with what they have, and they aren’t casting their net wide enough to look at technologies that can help them to do this”, says David Trossell, CEO at Bridgeworks. He finds that people are compromising on their business continuity and disaster recovery, and it’s only when a flood happens that it becomes top of mind again.
In his opinion, business continuity is a company’s best insurance policy, and by using the right data acceleration technology it is possible to locate datacentres in such a way that they avoid sitting in the same circle of disruption. Business continuity needn’t cost them the Earth either – most organisations may already have the infrastructure in place, but they may not have the technology to exploit it.
Ignore FUD
Trossell’s core message is: “Don’t automatically go to the large fear, uncertainty and doubt (FUD) vendors.” He argues that smaller and more innovative vendors can better address issues that their larger counterparts have not yet resolved. In his opinion there are solutions available to help organisations remove distance and speed limitations, enabling them, for example, to have a third off-site disaster recovery site that doesn’t sit within the same circle of disruption, as part of a viable risk-reduction strategy.
“Having more than one site is necessary because disasters come in many forms”, explains Clive Longbottom – Client Service Director at analyst firm Quocirca. He says a single site should enable organisations to deal with lower levels of disaster, such as component failure, single-item equipment failure, on-site power failure, off-site power failure (through UPS and auxiliary generation) and so on. Yet he agrees with Trossell that one site isn’t enough to deal with flooding, fire, earthquakes, etc.
“Most datacentres can deal with a small flood – but when Mother Nature really shows her power, only so much can be done”, he adds. In his view it’s the data that matters most – not the hardware or software. “Business continuity can be provided through much more effective and cheaper means with warm virtual machines being hosted on a shared remote site – and ensuring this happens in the most effective manner requires intelligence across the wide area network (WAN)”, he explains.
Plan for continuity
The problem is that all too many organisations aren’t planning properly for natural disasters. Trossell warns: “Even those people who are responsible for averting disaster don’t plan properly because they are just ticking boxes and they never seem to think that the impossible thing could very well happen.”
He emphasises that continuity is not about recovery: it is about preventing natural disasters from impacting business operations and services, to avoid financial and reputational damage.
So, to ensure that business service continuity remains your best insurance policy, he offers the following six best-practice tips:
- Place your datacentres at a distance from each other, and never within the same circle of disruption.
- Learn lessons from outside of the IT community in order to think outside of the box.
- Remember that disaster recovery is not always about natural disasters; it can be about hardware and software going wrong, human intervention or terrorism. Such incidents can and do take datacentres down.
- Plan for recovery and not the disaster by understanding the costs of what would happen if your datacentre were wiped out.
- Test your disaster recovery plan, because all too often such plans aren’t tried and tested.
- Define and really understand your Recovery Time Objective (RTO) – a rough worked example follows these tips.
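As a hedged illustration of that last tip – the dataset size, link speed and utilisation figure below are made-up assumptions, not numbers from Vodafone or Bridgeworks – a back-of-envelope estimate of how long a full restore would take over a WAN link can be compared against the stated RTO:

```python
# Rough back-of-envelope sketch (illustrative numbers only): estimate how long
# a full restore would take over a WAN link and compare it with the RTO.
def restore_hours(dataset_tb, link_gbps, link_utilisation=0.7):
    """Hours to move dataset_tb terabytes over a link_gbps link at the given utilisation."""
    dataset_gigabits = dataset_tb * 1000 * 8        # TB -> gigabits (decimal units)
    effective_gbps = link_gbps * link_utilisation   # protocol and latency overheads eat headroom
    return dataset_gigabits / effective_gbps / 3600

def meets_rto(dataset_tb, link_gbps, rto_hours):
    hours = restore_hours(dataset_tb, link_gbps)
    return hours <= rto_hours, round(hours, 1)

if __name__ == "__main__":
    # e.g. 50 TB to restore, a 1 Gbps link, and a 12-hour RTO
    ok, hours = meets_rto(dataset_tb=50, link_gbps=1, rto_hours=12)
    print(f"estimated restore: {hours} h -> RTO met: {ok}")
```

On these assumed numbers a 50 TB restore over a 1 Gbps link takes roughly 159 hours, far beyond a 12-hour RTO – exactly the kind of gap that only shows up when the plan is tested rather than ticked off.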
Longbottom advises that organisations should have two plans: business continuity and disaster recovery. As with any insurance policy, he says it’s also important to understand the business’s risk profile in order to define how much the business is willing to invest in IT service continuity. This risk audit should also consider whether it is advisable to locate datacentres in a different country, in different regions or on different continents to reduce the likelihood of a disaster, natural or otherwise, putting the organisation out of business. In the end such an investment will be cheaper and a lot less disruptive.