My wife worked for a British company that had outsourced some of its work to India. It was a long-standing joke that the mains power to the company in India was *far* more reliable than that to my wife's company in a business park in Oxfordshire: the Indian power "never" failed whereas the UK power was "always" failing (power cuts, either momentary or for an hour or so).
True, provided each of the dual independent PSUs was actually connected to separate independent supplies, not just one of them, as described recently in here (from ARW?).
Or Sod's Law struck: one of the dual supplies was unavailable (maintenance?) and the other fell over for some reason.
If it has functional PSUs and power...
It's that bit failing that is really odd, but one assumes there is some sort of control over which datacentre is "live" to prevent switching to and fro too often or too quickly.
Agreed, if nothing else it's a big wake-up call for everyone. Our society is almost totally reliant on the IT systems working.
And part of that is network connection. How many installations have independent routing of dual network connections without a single point of failure - anywhere?
This kind of stuff is *really* hard to get right. It's impossible to test (as someone has said, no-one's prepared to allow a "proper" test where you walk into the primary datacentre unannounced and whack the Big Red Switch), so all the tests are make-believe, and there are bound to be dependencies that no-one's thought of. And even if they have, making those people redundant and trying to run your data centre with minimum-wage monkeys using 3-ring binders (written by people who are going to lose their jobs) to run systems they don't understand is a recipe for disaster. Add that to the fact that no disaster recovery test *ever* tries to recover everything in one go, unlike a real DR, and I'm surprised it ever works at all. Add to *that* that management whinge about the cost of the DR hardware that never gets used (although you can use it for development, if you're careful and prepared to cease development in a DR situation - although you may then not have anywhere to test recovery when DR fails...)
I'd be prepared to wager it won't work. I have two real-world examples where it didn't:
- Trading floor where all dealers' workstations had dual network cards and networks. Failure of one network caused a broadcast storm on the other network.
- Transatlantic link where connections went from different comms rooms at opposite ends of the building via different telcos. Who both bought bandwidth on the same transatlantic cable, which got dredged out of the Channel by a trawler. Bingo. No transatlantic network connectivity.
I didn't say that a diversity mains supply was cheaper than a couple of megawatt gensets. Note the plural: if you only have one set you have a single point of failure. Even if you aren't running on it, it still needs maintenance periods, and Sod's Law dictates that the mains will fail just as some part of the set has been removed...
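A rough back-of-the-envelope sketch (Python, with made-up availability figures, not numbers from any real installation) of why the plural matters:

```python
# Availability of standby generation: one genset vs two independent ones.
# The 98% figure is purely illustrative.

def parallel_availability(unit_availability: float, n_units: int) -> float:
    """Availability when any one of n independent redundant units suffices."""
    return 1.0 - (1.0 - unit_availability) ** n_units

single = parallel_availability(0.98, 1)   # one genset
dual = parallel_availability(0.98, 2)     # two independent gensets

print(f"single genset: {single:.2%} available")   # 98.00%
print(f"dual gensets:  {dual:.2%} available")     # 99.96%
```

The same arithmetic cuts the other way if the two sets share anything (fuel supply, changeover switch, maintenance window): a shared dependency drags the pair back towards the single-unit figure.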
Fairy Nuff; as you say diesels are cheaper, and if they're properly and regularly tested a diversity mains is a bit of a luxury.
The lighting level and evenness are more down to the football authority/competition than TV. Modern cameras can make quite low lighting levels look a lot better than they do to the eye. The lighting load for most Premier League stadiums is in the region of
Oh I agree, each bit of the system from 6" patch lead to UPS has numerous failure modes and failure triggers that all need to be taken into account. I guess it could be modelled at the design stage and the model played with to see what may happen if you give it a poke just ... there.
Building the model would require pretty detailed information about the kit. Can it stand the power cycling? At what rate, for how long? What happens if the ambient temperature rises? Etc., etc. Equally hard is thinking of how to poke it. Would anyone have thought of shutting down the UPS then restarting it a few seconds later? Would the model behave like the real UPS under that scenario? After all, no one would shut a live UPS down then restart it, would they? B-)
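For what it's worth, a minimal sketch (Python, hypothetical component names) of the static half of that modelling: reduce the supply path to a boolean expression and fail each bit of kit in turn. Timing effects like shutting the UPS down and restarting it seconds later would need state and time added on top; this only finds which combinations drop the load at all.

```python
from itertools import combinations

# Hypothetical component names; the real list depends on the installation.
COMPONENTS = ["mains_a", "mains_b", "genset", "ups", "psu_a", "psu_b"]

def load_is_up(failed: set) -> bool:
    """True if the load still has power with the given components failed."""
    def ok(name: str) -> bool:
        return name not in failed
    # Any one supply source will do...
    supply = ok("mains_a") or ok("mains_b") or ok("genset")
    # ...the single UPS feeds both PSUs, and either PSU can carry the load.
    return supply and ok("ups") and (ok("psu_a") or ok("psu_b"))

# "Poke it": try every single and double failure and report the ones that hurt.
for n in (1, 2):
    for failed in combinations(COMPONENTS, n):
        if not load_is_up(set(failed)):
            print("load drops if we lose:", " + ".join(failed))
```

In this toy layout the lone UPS shows up immediately as the single point of failure, which is rather the point of poking the model before the kit is bought.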
It would be interesting. I suspect most places do have some form of backup 'net connection, even if it's only ADSL/VDSL, making the assumption that the primary is FTTP.
Not having a single point of failure is the hard one, especially for parts of the system that are not under your control. I've got three 'net connections (ADSL, wireless and 4G), plus a UPS and genset. Only the ADSL stands any real chance of working if the local primary substation loses both its supplies. We are on a different 11 kV distribution from the places in the wireless and 4G systems that need mains, but both of those share one place...
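To put numbers on how much that shared place costs you (entirely made-up probabilities, Python):

```python
# Chance of each link staying up through a long local power cut.
# All figures are invented, just to show the shape of the problem.
p_adsl = 0.7    # exchange may have battery/genset backing
p_wifi = 0.9    # wireless link, *if* its relay site has power
p_4g = 0.9      # 4G, *if* the mast site has power
p_site = 0.0    # the one mains-dependent site both wireless and 4G rely on

# Naive view: three independent connections.
naive = 1 - (1 - p_adsl) * (1 - p_wifi) * (1 - p_4g)

# Reality: wireless and 4G only work if the shared site is up.
real = 1 - (1 - p_adsl) * (1 - p_wifi * p_site) * (1 - p_4g * p_site)

print(f"naive availability: {naive:.1%}")   # 99.7%
print(f"with shared site:   {real:.1%}")    # 70.0%
```

On paper it's triple redundancy; during a long enough cut it's really just the ADSL.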
Is that because you know it's on a different power feed? The chance of the ADSL being battery-backed at the exchange is close to zero. The telephony part should be backed up, so you might get away with a dial-up modem.