What happened? One fire took down four data centers?
Yesterday, DataCenter Knowledge published details about the fire that took down four data centers:
A fire early Wednesday morning (March 10) destroyed one of OVH’s Strasbourg data centers and part of a second one.
The fire destroyed OVH’s SBG2 data center completely and four rooms in SBG1. UPS was down in the SBG3 facility, and the remaining SBG4 data center had “no physical impact”.
The company said it did not anticipate restoring power to SBG1 and SBG4 until Monday the 15th and SBG3 until Friday the 19th.
The article reads like an Agatha Christie novel. Number two passed away, unfortunately. Number three got hurt, but is still alive. And number four is unharmed, thank goodness. Because there is no mention of number one, we must assume it is okay. Unfortunately, the final chapter's disclosure is a sobering surprise: one fire took down all four data centers.
What’s more, they won’t be back up soon.
Infrastructure and data center design matters!
Disasters happen. That is why backups and disaster recovery plans exist. However, data center and infrastructure design and operations matter a lot too. I remember arriving at Heathrow several years ago, when data center hiccups caused BA's global operations to come to a grinding halt. Here is what happened in Strasbourg:
Many customers of OVH, which is the largest Europe-native cloud provider, complained on Twitter about downed websites and applications hosted at this campus. Some appeared not to have disaster recovery plans or backup sites.
It will be interesting to read the next chapter in this story. With the information presented today, I would not be surprised if several customers were under the impression that they were safe, enjoying high(er) availability thanks to OVH's multiple-data-center setup.
Most likely, the numbering at OVH's Strasbourg site is the result of an efficient expansion plan. When the return on these investments is high and capital is available, why not build another adjacent data center?
Too often, one can read between the lines that providers have grouped multiple data centers within walking distance of each other into one "region" or one "availability zone". Or worse, data centers separated by just a wall or a single floor. Or data centers that share a single UPS or the same data lines.
Real 'Availability Zones' and 'Regions'
When AWS built their global cloud infrastructure, they engineered and coined "Availability Zones" and "Regions". Furthermore, they made many significant changes to the conventional wisdom of data center engineering, with designs built to ensure high availability.
Fortunately, these concepts and principles have been copied by others. Unfortunately, the terminology has also been watered down. Even some hyperscale providers struggle to build sufficiently redundant, highly available infrastructure while expanding the number of 'regions' and services.
Numbering data center expansions and layering a cosmetic zone label on top adds neither value nor any guarantees. Providers may choose to expand and fix things when they break. The question is: who ends up paying the bill? Customers must look closely when choosing where to place their workloads and data.
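To make the placement question concrete, here is a minimal sketch of what "looking closely" could mean in practice. It models each facility by the physical components it depends on and treats any shared building, UPS group, or data line as a shared fault domain. The `DataCenter` model, the component names, and the sharing between the SBG sites are all illustrative assumptions, not OVH's actual topology or any provider's API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataCenter:
    """Hypothetical model of a facility and the components it depends on."""
    name: str
    building: str   # physical structure
    ups: str        # power feed / UPS group
    network: str    # uplink / data lines


def truly_independent(placements: list[DataCenter]) -> bool:
    """True only if no two replicas share a single point of failure.

    Every replica must sit in its own building, on its own UPS group,
    and on its own data lines; otherwise one incident can take out
    more than one replica at once.
    """
    n = len(placements)
    return (len({dc.building for dc in placements}) == n
            and len({dc.ups for dc in placements}) == n
            and len({dc.network for dc in placements}) == n)


# Four data centers on one campus; the sharing below is purely
# illustrative, loosely inspired by the story above.
sbg1 = DataCenter("SBG1", building="A", ups="U1", network="N1")
sbg2 = DataCenter("SBG2", building="A", ups="U1", network="N1")  # same building and UPS
sbg3 = DataCenter("SBG3", building="B", ups="U1", network="N1")  # shares the UPS group
sbg4 = DataCenter("SBG4", building="C", ups="U2", network="N1")  # shares the data lines
# A facility in a genuinely separate location (hypothetical).
par1 = DataCenter("PAR1", building="D", ups="U3", network="N2")

print(truly_independent([sbg1, sbg2]))  # False: same building
print(truly_independent([sbg1, sbg4]))  # False: shared data lines
print(truly_independent([sbg4, par1]))  # True: no shared component
```

The point of the sketch is that a zone label on its own tells you nothing; independence only follows from the underlying physical components, which is exactly what customers have to verify before trusting a "multi-data-center" setup.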
Strasbourg: the sequel
OVH is one of France’s unicorns and the article promises an interesting sequel:
Among cloud providers that aren’t the big three (AWS, Azure, and Google Cloud), OVH is one of the more popular ones. Just two days ago the company said it had started the process for a potential public listing in Paris.
OVH operates 15 data centers in Europe and 27 in total. The company disclosed its intent to IPO earlier this week. We cannot wait to read the prospectus' risk paragraph. What will OVH do with the funds from the IPO? What portion will it have to invest in fixing redundancy? We may know soon.
Want to discover how you can take advantage of cleverly engineered Availability Zones and Regions? Get in touch and send us a message.