Moving BT master socket, is this frowned upon?

I regret that I disagree with you - to use an analogy, it's like saying that because your car is fitted with seatbelts you are immune from being involved in an accident!

Equipment safety functions can - and do - go wrong. So whilst your PC (etc.) might work just fine in a given temperature range, the picture can change very markedly if, say, the internal fan fails. By way of example, if the PC is operating within its safety limits at an ambient temperature of 35 degrees C and an internal component then goes out of spec, the temperature inside the equipment could rise markedly and lead to a fire.

I simply don't think that having a single point of failure within equipment is a good idea, if you are continuously relying upon it to save your bacon.

Your home insurance company might well take a dim view of paying out if your equipment were to catch fire. Whether or not they can be challenged on their decision is another matter - but you can feel comfortable that they can afford a bigger team of lawyers than you can.

PoP

-----

My published email address probably won't work. If you need to contact me please submit your comments via the web form at

formatting link
I apologise for the additional effort; however, the level of unsolicited email I receive makes it impossible to advertise my real email address!

Reply to
PoP

That's rubbish. Electronic equipment can and does fail when operated outside its specified temperature range. The spec for any piece of equipment is a worst-case figure; you may well be able to operate a particular example well outside that spec, but there will be a limit at both high and low temperatures.

The low temperatures we get in this country (even left outside) would not generally be a problem for most electronic equipment if left powered up to keep it warm.

So? The air con failed and the ambient temperature became too high. It has nothing to do with rate of change. Even if you had raised the temperature over the space of a week you would have seen the same failures.

That's what they're designed to do. Take them outside their envelope and they *will* fail eventually.

Andrew

Reply to
Andrew

This kind of testing is looking for permanent, physical, failure modes.

Operating at too high a temperature in a loft is unlikely to do permanent damage (unless the temp is really extreme) but will cause malfunction.

Andrew

Reply to
Andrew

Well, it'd be a boring group if we all agreed on something ;-)

:it's like saying that because your car is fitted with seatbelts you
:are immune from being involved in an accident!

Not saying that at all. I'm just saying, using my not insignificant knowledge of computer management in large datacentres across an enormous range of kit, that measures such as these from Andy:

:I dealt with it by making an insulated cabinet and arranging two fans
:with ducting in and out from the outside and inside the house. The
:fan speeds are controlled by a temperature sensor and motor
:controller, and there are servo controlled dampers in the ducting.

are overkill to my mind, and still do not mitigate against internal equipment cooling-fan failures.
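(For illustration only, here is a minimal sketch of the sort of temperature-driven fan-speed loop Andy describes - not his actual implementation. It assumes the standard Linux sysfs thermal interface for the sensor reading; set_fan_duty() is a hypothetical placeholder that would need wiring to a real motor controller.)

# Minimal sketch of a temperature-driven fan-speed loop (illustrative only).
# The sensor path assumes the standard Linux sysfs thermal interface;
# set_fan_duty() is a hypothetical placeholder for a real motor controller.
import time

MIN_TEMP_C = 20.0   # at or below this the fan idles
MAX_TEMP_C = 35.0   # at or above this the fan runs flat out

def read_temp_c():
    # sysfs reports millidegrees C on a typical Linux box
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def set_fan_duty(duty):
    # Placeholder: wire this to the actual fan/motor controller.
    print(f"fan duty set to {duty:.2f}")

def duty_for(temp_c):
    # Map temperature linearly onto a 0.0-1.0 duty cycle, clamped.
    span = MAX_TEMP_C - MIN_TEMP_C
    return min(1.0, max(0.0, (temp_c - MIN_TEMP_C) / span))

if __name__ == "__main__":
    while True:
        set_fan_duty(duty_for(read_temp_c()))
        time.sleep(10)   # re-check every ten seconds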

Equipment will catch fire if it is so predisposed wherever it is located.

If this were caused by loft overtemperature then, I agree, there would be some culpability. If we have a long hot summer reminiscent of '76 then I will consider secondary cooling, or relocation of the equipment; until such time I remain unconcerned.

I remain convinced my firewall will be up after its freeze-thaw session last night, due in no small part to the microclimate around it ;-)

Cheers,

Paul

Reply to
Zymurgy

I know.

I was using this by way of example rather than as a cause. However, higher temperatures, even on average, do increase failure rates.

The main point of maintaining a suitably low temperature, as you say, is to prevent malfunction.

.andy

To email, substitute .nospam with .gl

Reply to
Andy Hall

Well... yes and no.

Datacentres use air conditioning, at not inconsiderable cost, to maintain a relatively constant low temperature, because the equipment manufacturers suggest that reliability in both the short and long run is improved.

A branded product will typically have higher quality components and more attention will have been paid to environmental factors than might be the case in a Hu Flung Dung superspecial.

The internal temperature (which is ultimately what counts) of PCs and more sophisticated networking equipment can be monitored and logged, and used to trigger shutdowns in the event of problems - whether through rising ambient temperature or a specific equipment fan failure. I tend to fit extra cooling fans to equipment and power them separately from the equipment PSU anyway. This is a pretty cost-effective way of working because fans are cheap.

Temperature monitoring inside is easy enough to do as well, and it is also simple to look for sudden rates of change of temperature and excessively high temperatures and to shut things down. For my particular requirements, I can selectively do that and still have enough redundancy to keep working as I need to do.
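(By way of a rough sketch of the scheme just described - an absolute threshold plus a rate-of-change check, shutting things down when either trips. The 50C and 5C/min figures are purely illustrative, not anyone's actual settings, and read_temp_c() assumes the standard Linux sysfs thermal interface rather than any particular bit of kit.)

# Hedged sketch of "shut down on excessive temperature or a sudden rise".
# Thresholds are illustrative only; read_temp_c() assumes the standard
# Linux sysfs thermal interface rather than any particular bit of kit.
import subprocess
import time

HIGH_TEMP_C = 50.0        # absolute limit before shutting down
MAX_RISE_C_PER_MIN = 5.0  # a rise this fast suggests a failed fan
POLL_SECONDS = 60

def read_temp_c():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0  # sysfs gives millidegrees

def emergency_shutdown(reason):
    print(f"shutting down: {reason}")
    # Requires suitable privileges; 'shutdown -h now' is the stock command.
    subprocess.run(["shutdown", "-h", "now"], check=False)

def monitor():
    previous = read_temp_c()
    while True:
        time.sleep(POLL_SECONDS)
        current = read_temp_c()
        rise_per_min = (current - previous) * 60.0 / POLL_SECONDS
        if current >= HIGH_TEMP_C:
            emergency_shutdown(f"temperature {current:.1f}C over limit")
            return
        if rise_per_min >= MAX_RISE_C_PER_MIN:
            emergency_shutdown(f"temperature rising at {rise_per_min:.1f}C/min")
            return
        previous = current

if __name__ == "__main__":
    monitor()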

Using controlled external cooling air is quite common for equipment racks as a means of maintaining a reasonably constant temperature and keeping things working in circumstances where high temperatures would otherwise result in malfunction or failure. As always, it's a cost/benefit trade-off, but I've found that the way I've done this works pretty well in terms of implementation and running costs.

.andy

To email, substitute .nospam with .gl

Reply to
Andy Hall

The air conditioning in datacentres usually also performs environmental scrubbing operations - removing dust particles from the air.

If dust were allowed to continue circulating then the equipment motherboards (etc) would over time acquire an overcoat of dust. That could potentially cause localised overheating at chip level.

However I digress. I'm showing my own long-term knowledge of working in a maintenance role in datacentres :)

PoP

-----

My published email address probably won't work. If you need to contact me please submit your comments via the web form at

formatting link
I apologise for the additional effort; however, the level of unsolicited email I receive makes it impossible to advertise my real email address!

Reply to
PoP

I have to agree. There are failure modes associated with temperature cycling, mostly mechanical stress leading to failures of joints and frit seals on chips, but by far and away the most usual cause of semiconductor PERMANENT as well as TEMPORARY degradation is overtemperature.

Most commercial equipment can reasonably be expected to work between 0C and 40C. The chips themselves are generally in spec between -5C and 70C, but that is not the whole story. MIL-spec stuff is rated between -55C and 125C.

Precisely. Internal air temps over 50C are almost certainly indicative of very high junction temperatures - go over a 175C junction temperature on MOS and it's 'good night, Vienna'.

Mostly they stop working before they fail. Chips are made to lie within specs, but no manufacturer in the world designs his kit to accept components that are all at the worst possible end of the specification spectrum.

Instead, a Monte Carlo analysis is done at best. In practice what actually happens is that the designers do their best, a few prototypes are temperature tested, and production goes ahead. If lots of users report a similar problem then the design may be examined, but mostly they just get replacement boards. It's cheaper.

Even MIL-spec kit is not necessarily designed to any different standards, but it may well be sample tested in an environmental chamber to ensure it works over the specified range.

Reply to
The Natural Philosopher

:Equipment will catch fire if it is so predisposed wherever it is
:located.

The coolest place without forced cooling on a hot summer's day is in open shade. If you have efficient loft ventilation, that is often the loft.

If there is restricted airflow and e.g. a tiled roof, then the best option is to build an insulated room in the loft and arrange significant ducts to it, forcing air through. Air temperature seldom exceeds 30C in this country above street level, and although this is on the high side for consumer equipment, it's not a huge problem if there is an adequate supply of it.

Low temperatures are not such a problem. Most kit will do -5C all right. Semiconductors lose gain and get slower as they get colder; sometimes this is enough to cause timing errors and a system crash. Mostly it's higher temps that do the harm, though. I have fixed several recalcitrant servers by blowing the s**te out of/replacing the fans and getting them working again. Some died permanently. These were all Sun SPARCs, BTW.

Reply to
The Natural Philosopher
