Choosing a UPS for a home setup - advice, please.

My UPS would not talk to my server (only Windows was supported by the UPS software, and NUT never worked with it), so I had to resort to the server detecting when it lost power on one of its dual PSUs - not ideal, but it did work.
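For anyone falling back on the same trick, here is a rough Python sketch of how that detection might be scripted, assuming the server's BMC is reachable with ipmitool; sensor names and column layout vary by vendor, so treat it as a starting point only:

    import subprocess, time

    def psu_lines():
        # Ask the BMC for its power-supply sensors. Needs ipmitool and a
        # reachable BMC; the output columns differ between vendors.
        out = subprocess.run(["ipmitool", "sdr", "type", "Power Supply"],
                             capture_output=True, text=True, check=True).stdout
        return [l for l in out.splitlines() if l.strip()]

    while True:
        for line in psu_lines():
            fields = [f.strip() for f in line.split("|")]
            # A typical line: "PS1 Status | C8h | ok | 10.1 | Presence detected"
            if len(fields) >= 5 and (fields[2] != "ok" or "AC lost" in fields[4]):
                print("Lost input on:", fields[0])
                # ...start an orderly-shutdown timer here...
        time.sleep(10)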

Reply to
SteveW

As it is now 62 years ago, I can tell a tale about UPS. Final-year apprentice, with a car! I was often sent to sites to stand by during testing of our electrical equipment. This occasion was the American Air Force: a communications setup with a specification of only 10 ms interruption of supply. Our device was a transistor power supply for the windings of a permanently running, battery-fed alternator. There might have been 50 lead-acid batteries in that room! The test failed miserably. Basically, the winding inductance opposed the level of current change needed to maintain synchronous speed.

The solution was to fit a massive flywheel :-)

Reply to
Tim Lamb

With the addition of a changeover switch for the boiler (and maybe lighting) supply, our EV would allow that too.

Reply to
SteveW

In the '90s I worked on the works testing of two 24MW generator sets for a North Sea rig. In each control room was a huge stack of lead-acid batteries to supply a DC lubrication pump (in an emergency shutdown, the gas turbine front bearing would melt from residual heat without the lubricating oil circulating). The requirement was to supply 22kW, at 110V DC, for a minimum of 5 hours!
Reply to
SteveW

My problem is that, as currently configured, the central heating comprises mains-powered thermostats all over the place and mains-powered fan-blown heaters as well as the underfloor heating.

It's just too complex to UPS it all. It would make more sense to have a 5kW diesel genny outside in a shed...

Reply to
The Natural Philosopher

I don't run RAID for that reason.

I just have mirror disks, synched once a night.

RAID is not for backup, it is for high availability. No domestic setup needs RAID.

...

So what's the point of RAID? In short, it's not a data protection strategy at all; it's high availability of data in a data centre.

They are not kept alive. They shut down smoothly. They probably have big capacitors to flush writes to the NVRAM if they detect power failure. I pull the plug on mine often, without issue.

You appear to be babbling.

Flash doesn't need power to retain data. It will only cache data in RAM for a short while before writing it, and the time to write it if external power goes down is microseconds. It doesn't need a supercap; an ordinary one will do. And it doesn't need 'advanced power fail detection': a simple monitor of the supply voltage on the far side of a diode will serve to tell them they have lost external power and should flush all caches to NVRAM now.
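As a toy model of that flush-on-brownout logic (illustrative Python only; real drives do this in firmware, and both voltages below are assumptions, not measured values):

    NOMINAL_V = 5.0
    BROWNOUT_V = 4.5   # assumed trip point on the far side of the input diode

    ram_cache = {}     # writes buffered in volatile RAM
    nvram = {}         # stands in for the flash itself

    def on_voltage_sample(volts):
        # Called for each sample of the post-diode supply rail.
        if volts < BROWNOUT_V:
            # External power is collapsing: flush everything now, while the
            # bulk capacitor still holds the rail up (microseconds of work).
            nvram.update(ram_cache)
            ram_cache.clear()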

How do you *think* SSDs would work if you shut down the computer they are attached to? They don't have a 'shutdown' signal; they just get the power removed. They presumably simply DO monitor supply voltages.

Reply to
The Natural Philosopher

I saw the same on the Decca Elizabethan used to test radar systems. Before takeoff there was a massive whining under the floor. 'What's that?' 'It's our rotary converter: powers everything in the racks off the aircraft 48V.' 'Why not use transistors?' 'Because when the wheels go up we are lucky to get 24V out of the batteries: the rotary converter has enough spinning mass to keep it all stable.'

Reply to
The Natural Philosopher

The one piece of advice that I would give you, from learning the hard way: when you receive the UPS, bite the bullet and actually make sure it works while it's still in warranty. Do it at a time when you can manage without the various computers - both when you shut them down cleanly to allow the UPS to be interposed between the mains and the computers, and when you simulate a loss of mains and hope that the UPS will work.

My wife bought an expensive PC which came with a UPS - probably rated at 700 VA, since that was a common size at the time.

We didn't test it at the time, and only got round to trying it a year or so later, by which time its manufacturer's warranty had expired. And it was as dead as a dodo. Even after leaving the battery on charge for a couple of days (vastly in excess of what its battery should need), it would not supply any power.

I took it out of the PC setup and tried it in isolation. It lit a 60 W tungsten bulb for about 5 seconds. It lit a 7 W LED bulb (Philips Hue, IIRC) for about 15 seconds. Powered by the mains, and connected to a PC by mains and by its USB monitoring connection, with the PC running the UPS's monitoring software, the UPS reported the battery state as "excellent" but the UPS would not run off that battery.

This is something I have experienced in several situations. I have a Samsung laptop and its battery is reported by Windows and by MX Linux (booting off different HDDs) as 100% capacity while connected to the mains PSU/charger, but as soon as I unplug the PSU, the laptop turns off instantaneously.

It seems that laptops and UPSes can "see" their battery as holding a full charge, but as soon as you remove the power input, the device stops working, as if the battery is really as flat as a pancake.

In my experience, you shouldn't trust the UPS hardware and software to tell the truth - test it periodically by simulating a power cut.

And simulate the type of power cut that you typically get. It may be a long power cut, or it may be a series of brief 1-second cuts in rapid succession. My Windows 7 PC seems to be fine with a single power cut (whether brief or lasting several hours) and will always boot fine afterwards. But several 1-second cuts at 10-second intervals will knacker the HDD. Not irreparably, but enough to require a very long file-system check before it will start to boot.

Reply to
NY

My NAS, PC and UPS talk to each other via USB. Rather than setting a time to run on battery power, I've set mine to shut down when there's 5% charge remaining.

I only have a small UPS - the point being to let the PC and NAS shut down gracefully avoiding any disk corruption, rather than to let me carry on working during an extended power cut.
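For anyone scripting that kind of rule themselves, here is a minimal Python sketch, assuming a NUT upsd is answering on its default port and the UPS is registered under the (hypothetical) name "ups"; the 5% figure matches the setting above:

    import socket, subprocess, time

    HOST, PORT, UPS = "localhost", 3493, "ups"   # upsd defaults; UPS name assumed
    THRESHOLD = 5                                # percent remaining

    def query(var):
        # Speak NUT's plain-text protocol: GET VAR <ups> <var>
        with socket.create_connection((HOST, PORT), timeout=5) as s:
            s.sendall(f"GET VAR {UPS} {var}\n".encode())
            reply = s.recv(1024).decode()
        # Reply looks like: VAR ups battery.charge "42"
        return reply.split('"')[1]

    while True:
        on_battery = "OB" in query("ups.status")
        charge = float(query("battery.charge"))
        if on_battery and charge <= THRESHOLD:
            subprocess.run(["shutdown", "-h", "now"])   # graceful power-down
            break
        time.sleep(30)

(In practice NUT's own upsmon can handle the shutdown for you; the sketch just shows the moving parts.)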

Reply to
Reentrant

True, RAID is not a backup strategy, but it is a layer of fault tolerance that can save downtime. You can also get increased throughput with the higher-level RAID categories.

Reply to
John Rumm

I have mine set to initiate shutdown when the reserve power reaches a threshold (like 20% remaining). That still leaves some scope to power something back up again during a power cut if it became necessary.

Reply to
John Rumm

Exactly the model (1200VA) I am contemplating - but when I placed an order for one, the supplier insisted that Cyberpower required a signed statement that it was to be supplied as a commercial item, limiting some of a retail customer's rights. They also claimed that this didn't reduce my statutory rights - but I didn't accept that.

I intend to use the USB connection to do a controlled shutdown of the NAS - that being the most vulnerable item. If the PC is running, I should be sat in front of it, or close enough, and so able to do a manual shutdown in plenty of time.

Reply to
Sam Plusnet

Thanks. That sounds like very good advice.

I've read reports of people who said their UPS claimed to be at 100%, but then fell over in a few seconds with only a minor imposed load. However, I hadn't thought of the warranty aspect.

Reply to
Sam Plusnet

Indeed. Choices made should reflect the expected use-case. I rarely play games now, and the ones that I do are not going to stress a modern system too much. If a power cut (greater than a second or two) happens, I want the UPS to hold up long enough for me to save files and safely power down all systems - not to carry on playing some cutting-edge game at max settings for the next hour or two.

That's another good point. You buy a UPS to support the setup you have - but we all tend to change kit from time to time. It usually pays to be generous when specifying.

Reply to
Sam Plusnet

Good advice, but we don't have to simulate them here.

Reply to
Tim Streater

Yes. My RAID has a battery-backed buffer and is very tolerant of sudden shutdown - and, of course, of a disk failure. It also increases throughput and allows a greater total capacity than mirroring. Of course, I do back up to another place.

When I first got a RAID card (same as the one I use now, but dedicated to the server it was in, rather than a more generic PC), I set it up with a single disk, for testing. I set it to RAID 1, but didn't fit the second drive. All worked well, although it obviously reported a fault with the second drive. I then fitted more drives and told it to convert the array to RAID 5. During the night, part way through conversion, we had a power cut. When I restarted the server, conversion continued from the same point and completed later that day, with no problem. At all times during the conversion, the test data remained available and intact. I was impressed.

Reply to
SteveW

A lot of what I've learned about SATA SSDs comes from reviews on the Anandtech site.

They used to show pictures of controller boards. They would show the area on the consumer PCB where pads were available for a Supercap plus an SMPS powered by the Supercap, but the components were not on the PCB.

Yet, on an Enterprise drive they reviewed, the Supercap and SMPS were populated.

This suggested that reliable shutdown procedures on the Enterprise drive were established by:

1) Advanced power fail detection - noticing that the external rail was collapsing.
2) Operation of the Supercap plus SMPS, to continue operating the PCB.
3) Put-away of critical data.

You would not go to this much trouble unless there was a reason to be doing it. Some SSD drives have a DRAM cache, and some are cache-less. The cache-less ones potentially have fewer writes to do at shutdown.

In the case of the consumer drive, there is no backup power, and the size of the bypass capacitors is limited. You cannot use too large a bypass, because if the SSD is connected to a USB bridge, the capacitance would violate the 10uF limit on USB peripherals (the inrush concern and rail-collapse issue).

Reliable recording of the virtual-to-physical sector map inside the drive must therefore be implemented in some other way. With no backup power system, if a consumer SSD drive loses power, it has *no* resources to help itself. Without a backup power source, it would have to frequently record or update the virtual-to-physical mapping table as the table contents changed. Without the mapping table, the information inside the SSD drive is scrambled and unusable. Sector 0 of an SSD is not location 0 in the flash. The sectors move around, according to wear-leveling requirements.
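As a toy illustration of why that table matters (plain Python, nowhere near a real flash translation layer; all the structures here are made up for the example):

    l2p = {}           # logical sector number -> physical page
    erase_counts = {}  # physical page -> wear count, for levelling
    free_pages = set(range(1024))

    def write_sector(lsn, data, flash):
        # Crude wear levelling: pick the least-worn free page.
        page = min(free_pages, key=lambda p: erase_counts.get(p, 0))
        free_pages.remove(page)
        flash[page] = data
        old = l2p.get(lsn)
        if old is not None:
            free_pages.add(old)                        # old copy becomes garbage
            erase_counts[old] = erase_counts.get(old, 0) + 1
        l2p[lsn] = page    # this table is exactly what must survive power loss

Lose l2p, and every sector is still physically in the flash - but you no longer know which logical sector any of it belongs to.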

While it has been mentioned previously that critical data is stored in flash devices in an "SLC-like" small area, this isn't good enough, because it does not have the write-life for the frequency of updates required. The SLC-like area would be good enough if the drive only had to write that area once, at shutdown. How many blocks could you write, using a 10uF cap as a power source? The answer is: not many.
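A back-of-envelope check, with assumed (not datasheet) figures for a NAND page program:

    C, V1, V0 = 10e-6, 5.0, 3.0               # cap, rail voltage, minimum usable voltage
    available_j = 0.5 * C * (V1**2 - V0**2)   # about 80 microjoules in the cap

    # Assume a page program draws ~30 mA at 3.3 V for ~500 us:
    per_page_j = 0.030 * 3.3 * 500e-6         # about 50 microjoules per page

    print(available_j / per_page_j)           # roughly one or two pages: "not many"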

It's not obvious what method is used to make consumer drives reliable. Yes, I've had the power go off here, and mine survived. It would be comforting to know what the method was, as a means of estimating how reliable it might be.

As an example, someone in one of the other USENET groups is the equivalent of Geek Squad. He deals with consumers and SOHO/small-business people. He fixes their problems, does their updates, designs automated backup schemes. He also sells them equipment - in particular, Samsung drives.

He's had some returns - drive failures. Well, it would be nice to know what those customers did to have those drive failures. I don't know the ratio of drives sold to returned units. I've had no trouble here, but my sample size is tiny and meaningless.

Early SSD drives were terrible. There was an article about Intel entering the SSD drive business, getting their hands on the source code of typical firmware, and doing a Picard facepalm when they saw what it was doing. So, at least initially, the firmware was flawed from an algorithm perspective. But there were no further details on whether they shared what they observed with anyone else.

Even hard drives have had algorithm failures, one of which caused a data structure the drive relied upon to corrupt roughly one month after the drive started being used. You could recover drives with that failure. It involved putting a piece of cardboard between the head-cable pads and the head cable, operating the drive without it being able to read the platter, typing two cryptic commands into the drive's TTL-level serial port, then pulling the cardboard away and re-seating the PCB. And then your data was accessible again. Some other drive issues have been fixed by replacement code images. That's a quality issue, rather than a too-many-noobs issue for the industry.

Hard drives solve the power problem by turning the motor into a generator, by modifying the H-bridge switch settings. Power from the generator is used to drive the voice coil and cause the heads to retract up the ramp, but at the same time some "last writes" get done too. On some of the newest drives (those with 512MB cache chips), additional flash memory has been fitted on the hard drive controller board. The flash memory receives the contents of the 512MB cache as the drive carries out its emergency power-fail procedures. This is only on the most expensive drives. On drives with 256MB of cache, the drive seems to have the time to write the cache to the platter. And that's an example of a "carefully budgeted" emergency procedure.

Do SSDs have a procedure they could tell us about? I'm listening.

They're not issue-free.

formatting link
Paul

Reply to
Paul

They do a battery test once a day - a kind of impedance test.

If the battery fails the load test, the unit will beep once.

There may also be a button on the casing that does a "flip to battery" inverter test. It simulates loss of mains. But even without inverter testing, it will check the battery for you.

Modern UPSes can have a tiny display on them and a CPU, so there is a better chance the response will be intelligible. On my old one, it's all meant to be mysterious.

On an SPS UPS (the lowest form), the chassis is made of a piece of heavy steel, and it is normally ice cold to the touch. If you feel the chassis and it is a bit warm to the touch, this means cells in the lead-acid battery have failed short, and the battery has changed from a 12V battery to a 6V battery with three cells shorted (a 12V pack is six cells of roughly 2V each, so three shorted cells leave about 6V).

By feeling the heat, that's when I knew it was time to take it out of service.

High-end ones (double conversion with sine-wave output, in server rooms) have a fan, and the inverter is running all the time. Those are more likely to be well designed, and to do a more thorough job on test.

At work we bought more than 100 UPSes in a bulk purchase and put them on the office computers. These were cheap SPS (Standby Power Supply) type UPSes. The failure rate off the pallet was 10%. Some units would not flip to battery on loss of mains. Some units would not flip off battery when mains power returned. The nicely distributed flaw types suggest the units were not acceptance-tested before being shipped.

Units ship with the battery disconnected from the unit. This might prevent some amount of self-discharge. I don't really know what the "shelf life" of a boxed UPS is, in terms of not damaging the battery by leaving it to sit.

Battery life is all over the place. The original battery lasted 11 years (perhaps longer than what other people experience). The battery still had full voltage (no shorted cells), but no longer had good capacity. The once-a-day battery impedance test indicated it needed help. But it still behaved like a 12V battery.

The replacement lasted 3 years, and the chassis got warm during the replacement battery's life cycle. Measurement with a meter later indicated that half the cells had failed short, and it was then a 6V battery instead of a 12V one.

Checking the float charge (operating the UPS without the cover on), the float was 13.5V, which is more or less what I expected it to be. If the float wasn't right for the replacement pack, that could lead to a short life. But the replacement was branded, so the company should have checked that the Chinese battery actually had the right characteristics. Even the *original* 11-year battery (with a branding label adhered over the Chinese name) was a Chinese one, so it's not being Chinese that mattered. Something must have been wrong with the specs of the second lot.

And there still aren't a lot of lithium-based ones. If done that way, they should be LFP (lower density, but happy-go-lucky). I'm not sure the UPS companies are prepared for the liabilities involved.

Paul

Reply to
Paul


I've noticed that replacement batteries from the UPS manufacturer are terribly expensive. E.g. a 7Ah 12V sealed lead-acid Yuasa battery can be had for £15 or so, while an (apparently) identical battery specified for an APC UPS is around £70.

For domestic users, is there a good reason to pay the extra?

Reply to
Sam Plusnet
