BA - "Power Surge" was to blame???

My wife worked for a British company that had outsourced some of its work to India. It was a long-standing joke that the mains power to the company in India was *far* more reliable than that to my wife's company in a business park in Oxfordshire: the Indian power "never" failed whereas the UK power was "always" failing (power cuts, either momentary or for an hour or so).

Reply to
NY

The systems involved were critical to their business.

SteveW

Reply to
Steve Walker

A lot of reputational damage and it won't help this year's results, but I'd have thought BA is "too big to fail".

Reply to
newshound

The smug shall inherit the earth

:-)

Reply to
newshound

My old GP, now long retired, often used to use the acronym GOK. But only in suitably select company.

(God only knows, for any who haven't met it before).

Reply to
newshound

formatting link

Reply to
harry

True, provided each of the dual independent PSUs was actually connected to separate independent supplies, not just one of them, as described recently in here (from ARW?).

Or Sod's Law struck: one of the dual supplies was unavailable (maintenance?) and the other fell over for some reason.

If it has functional PSUs and power...

It's that bit failing that is really odd, but one assumes there is some sort of control over which datacentre is "live", to prevent switching to and fro too often or too quickly.
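Something along the lines of the sketch below is what I'd expect - purely illustrative Python, where the class name, thresholds and timings are all made up rather than anything BA (or anyone else) actually runs:

import time

class FailoverController:
    """Toy sketch of the sort of control one assumes sits over a pair of
    datacentres: only switch the "live" site if the current one has been
    unhealthy for a sustained period, and never more often than a cooldown
    interval, to stop it flapping to and fro."""

    def __init__(self, sites, unhealthy_for_s=60, cooldown_s=900):
        self.sites = list(sites)            # e.g. ["DC-A", "DC-B"]
        self.live = self.sites[0]
        self.unhealthy_since = None
        self.last_switch = float("-inf")
        self.unhealthy_for_s = unhealthy_for_s
        self.cooldown_s = cooldown_s

    def report_health(self, site, healthy, now=None):
        """Feed in a health-check result; returns the currently live site."""
        now = time.monotonic() if now is None else now
        if site != self.live:
            return self.live
        if healthy:
            self.unhealthy_since = None
            return self.live
        if self.unhealthy_since is None:
            self.unhealthy_since = now
        sustained = (now - self.unhealthy_since) >= self.unhealthy_for_s
        cooled_down = (now - self.last_switch) >= self.cooldown_s
        if sustained and cooled_down:
            self.live = next(s for s in self.sites if s != self.live)
            self.last_switch = now
            self.unhealthy_since = None
        return self.live

With health checks every few seconds you'd need a solid minute of failures before it switched, and it won't switch back for another quarter of an hour afterwards. Crude, but it shows the "not too often or too quickly" idea.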

Agreed, if nothing else it's a big wake-up call for everyone. Our society is almost totally reliant on the IT systems working.

Reply to
Dave Liquorice

In article , "Dave Plowman (News)" writes

Racist.

Reply to
bert

And part of that is network connection. How many installations have independent routing of dual network connections without a single point of failure - anywhere?

Reply to
bert

I've specified a couple - both industrial PLC/SCADA systems rather than datacentres though.

SteveW

Reply to
Steve Walker

Remember also that every extra level of security on a system has a cost.

Total PITA for those caught up in it of course, and likely to cost millions but, globally, one such failure a year might be tolerable.

Reply to
newshound

This kind of stuff is *really* hard to get right. It's impossible to test (as someone has said, no-one's prepared to allow a "proper" test where you walk into the primary datacentre unannounced and whack the Big Red Switch), so all the tests are make-believe, and there are bound to be dependencies that no-one's thought of. And even if they have, making those people redundant and trying to run your data centre with minimum-wage monkeys using 3-ring binders (written by people who are going to lose their jobs) to run systems they don't understand is a recipe for disaster.

Add that to the fact that no disaster recovery test *ever* tries to recover everything at one go, unlike a real DR, and I'm surprised it ever works at all. Add to *that* that management whinge about the cost of the DR hardware that never gets used (although you can use it for development, if you're careful and prepared to cease development in a DR situation - although you may then not have anywhere to test recovery when DR fails...)

Yep.

Reply to
Huge
[47 lines snipped]

I'd be prepared to wager it won't work. I have two real-world examples where it didn't:

- Trading floor where all dealers' workstations had dual network cards and networks. Failure of one network caused a broadcast storm on the other network.

- Transatlantic link where connections went from different comms rooms at opposite ends of the building via different telcos. Who both bought bandwidth on the same transatlantic cable, which got dredged out of the Channel by a trawler. Bingo. No transatlantic network connectivity.

Reply to
Huge

It now appears that the power surge was in the negative direction ...

formatting link

or

formatting link

... which surely begs the question: what were Adam's apprentices up to at the time?

Reply to
Terry Casey

And by Murphy's Law in the place where it could do most damage!

Indeed. Sounds like a H&S investigation into unsafe working practices and safe systems of work on mains power kit would be more appropriate.

It also begs the question as to why they allow untrained contractors to wander around in such sensitive critical areas unsupervised.

It does explain all the damage done though if, on realising their mistake, they just threw the switch back again and prayed.

Reply to
Martin Brown

I didn't say that a diversity mains supply was cheaper than a couple of megawatt gensets. Note the plural: if you only have one set you have a single point of failure; even if you aren't running on it, it still needs maintenance periods, and Sod's Law dictates that the mains will fail just as some part of the set has been removed...

Fairy Nuff. As you say, diesels are cheaper, and if they're properly and regularly tested, a diversity mains supply is a bit of a luxury.
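To put rough numbers on why the plural matters, a back-of-envelope sum (Python; the availability figures are plucked out of the air, not real data for any supply or genset):

# Back-of-envelope availability sums. Figures are made up for illustration.
mains = 0.999    # assumed availability of one mains feed
genset = 0.95    # assumed availability of a single genset (incl. maintenance)

configs = {
    "mains only":        mains,
    "mains + 1 genset":  1 - (1 - mains) * (1 - genset),
    "mains + 2 gensets": 1 - (1 - mains) * (1 - genset) ** 2,
}
for label, availability in configs.items():
    hours_down = (1 - availability) * 365 * 24
    print(f"{label:18s} availability {availability:.6f}  ~{hours_down:.2f} h/year down")

The catch, of course, is that those sums assume the failures are independent, which is exactly what Sod's Law tends to break.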


The lighting level and evenness is more down to the football authority/competition than TV. Modern cameras can make quite low lighting levels look a lot better than they do to the eye. The lighting load for most Premier League stadiums is in the region of 400 kW (200-ish 2 kW luminaires).

Reply to
Dave Liquorice

Oh I agree, each bit of the system from 6" patch lead to UPS has numerous failure modes and failure triggers that all need to be taken into account. I guess it could be modelled at the design stage and the model played with to see what may happen if you give it a poke just ... there.

Building the model would require pretty detailed information about the kit. Can it stand the power cycling? At what rate, for how long? What happens if the ambient temperature rises? etc etc. Equally hard is thinking of how to poke it. Would anyone have thought of shutting down the UPS then restarting it a few seconds later? Would the model behave like the real UPS under that scenario? After all no one would shut a live UPS down then restart it would they? B-)
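A back-of-a-fag-packet version of such a model might look like the Python below - every component name and failure probability is invented for illustration, and the point is only that you can "poke" one part (say, a power-cycled UPS) and see what the model reckons:

import random

# Toy Monte Carlo model: made-up components with made-up per-incident
# failure probabilities, purely to illustrate "poking" the model.
COMPONENTS = {
    "patch_lead": 0.001,
    "switch": 0.002,
    "ups": 0.005,
    "psu_a": 0.003,
    "psu_b": 0.003,
}

def trial(extra_stress=None):
    """One simulated incident; returns True if the service stays up.
    extra_stress multiplies a component's failure probability, e.g.
    {"ups": 10} to mimic a UPS being shut down and restarted."""
    failed = set()
    for name, p in COMPONENTS.items():
        if extra_stress and name in extra_stress:
            p *= extra_stress[name]
        if random.random() < p:
            failed.add(name)
    # Service survives if the UPS is up and at least one PSU survives.
    return "ups" not in failed and not {"psu_a", "psu_b"} <= failed

def estimate(runs=100_000, extra_stress=None):
    return sum(trial(extra_stress) for _ in range(runs)) / runs

print("baseline survival       :", estimate())
print("UPS cycled (10x stress) :", estimate(extra_stress={"ups": 10}))

Whether the real UPS would behave anything like the model under that scenario is, as you say, another question entirely.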

Reply to
Dave Liquorice

It's not that hard, BTGTTS. However it's bloody expensive to do it right.

There are also significant differences between systems designed for high availability and systems designed for data integrity.

For instance, Google would be designed for high availability and a bank system for data integrity.

We used to do those sort of tests on the hardware and software before every release. It took teams of people to do it.

Reply to
dennis

It would be interesting. I suspect most places do have some form of backup 'net connection, even if it's only ADSL/VDSL, making the assumption that the primary is FTTP.

Not having a single point of failure is the hard one, especially for parts of the system that are not under your control. I've got three 'net connections (ADSL, wireless and 4G), plus a UPS and genset. Only the ADSL stands any real chance of working if the local primary substation loses both its supplies. We are on a different 11 kV distribution from the places in the wireless and 4G systems that need mains, but both of those share one place...

Reply to
Dave Liquorice

Is that because you know it's on a different power feed? The chance of ADSL being battery-backed at the exchange is close to zero. The telephony part should be backed up, so you might get away with a dial-up modem.

Reply to
dennis
