OT - Server Power Summary

Hi guys

I am looking again at air conditioning for a proposed server room.

While I don't mind over-egging it, I would like to know how much custard I'm throwing around here.

The servers in question are HP DL360 Gen 9s, each fitted with two 800 W power supplies. My understanding is that one of these is a redundant standby supply, which means the server should run happily on just one 800 W supply.

However, there are figures quoted at the bottom of the parts specification for the server which state:

Power Summary (based on 100% utilization)

Total system VA rating: 343.35
Total system BTU/hr: 1154.31
Total input system current: 1.56
Total wattage: 338.51
Power supply efficiency: 0.93
Power factor: 0.99
Inrush current: 30
Leakage current: 0.75

The company group IT dept seems to be suggesting that we should be providing 1.6 kW of cooling for each server (for the two 800 W supplies). My understanding is that the server's power consumption, and therefore its ability to generate heat, would be limited to the 343.35 VA. Not sure what the difference is between VA and watts, to be fair, but this gives a factor of roughly 5 between the two figures. Is anyone able to clarify which approach is correct, please?

Thanks

Phil

Reply to
thescullster

In normal running it will take 50% from each PSU; if one fails, 100% from the remaining PSU. Is it just a single server? Is it a room or a broom cupboard?

Reply to
Andy Burns

Having had real experience of medium computer rooms: you *must* have n+1 redundancy at the minimum, unless you are willing to shut machines down (and air con repairs are often not fast, even on contract).

So you need two air con units, each rated so that one alone can do the job, or three units with any two able to. You can go to n+2 or higher redundancy if you want extra protection against failure, but IME it tends to be a single unit that fails.
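As a sketch of that arithmetic (all figures hypothetical - substitute your own measured peak heat load):

# Sketch: sizing air con units for n+k redundancy,
# i.e. n units carry the full load and k are spares.

def aircon_units(peak_load_kw, n, k):
    """Return (units to install, minimum rating per unit) such that
    any n of the n+k installed units can carry the full load."""
    return n + k, peak_load_kw / n

# Two units, each rated for the whole load (one duty unit plus one spare)
print(aircon_units(10.0, n=1, k=1))   # -> (2, 10.0)

# Three units with any two able to do the job
print(aircon_units(10.0, n=2, k=1))   # -> (3, 5.0)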

And yes, you are right - you need to size for actual consumption - but peak consumption, not idling. If you cannot measure that, sizing by plate rating is safer.

BUT you are right - if the server runs on one 800 W PSU, you size for 800 W of cooling, not 1600 W. The server does not decrease its load to match failed PSUs. But watch out for 3/2 PSU configurations (runs on 3 or 2 but not 1 - eg some big disk arrays or network switch stacks).

Other comments:

Will your cooling be on UPS? If not, things get warm real fast when the mains fails - been there with about 40 kW of UPS-protected servers, no air con and no windows! In that case, automatic orderly shutdown of machines would be advisable - perhaps triggered by a whole-room temperature logger/sensor, like computer rooms used to have.

And water - air con units are prone to spilling water (condensate) when their drain gets blocked or something else goes wrong. It can be a very good idea to have a leak alarm around the units, or secondary containment under them (a big tray).

Reply to
Tim Watts

Yeah, but these places seem to grow stuff, at least all the ones I've been into!

Reply to
tony sayer

In the article , tony sayer wrote:

+1, in spades.

I've specified, designed and had built two server rooms for university departments. It's always cheaper and easier to over-specify the aircon in the first place, rather than try and retrofit additional cooling later. (ps. Do the same for the power feed!)

You can be sure some git will wander along and say, "uh, I've got a little rack mount server, it's too noisy for my office, can you install it in one of your racks? It won't take up much power or heat" and it'll turn out to be some 4U monster containing three dozen Xeons running full pelt 24/7 doing astrophysical number crunching.

In the case of the second room, for server plant with an estimated maximum heat output of 30kW, two 33kW floor standing aircon units were installed in such a way that they circulated cool air around the five fully-populated full-height (42U) racks. Most of those contained HP Proliant DL360G3/4/5 servers, plus disk arrays of various makes and vintages. The target temperature of the room was 19C.

That was OK, but I neglected to allow for the fact that the room gained a lot of solar heat in the summer through a large bank of floor-to-ceiling southwest-facing windows. One day, despite the aircon, the room temperature reached 39C and stuff began shutting itself down. We had to have reflective tinted window film and roller blinds installed to deal with that, and even then it was touch and go.

Reply to
Mike Tomlinson

Yours, for the average case; theirs, for a possible peak case.

Reply to
The Natural Philosopher

You use the Watts figure for aircon, and the VA figure for sizing the power circuits.
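To put numbers on that with the OP's spec-sheet figures (a sketch in Python; the 3.412 BTU/hr-per-watt conversion is standard):

# Sketch: watts vs VA, using the figures from the OP's power summary.

watts = 338.51        # total wattage at 100% utilisation
va    = 343.35        # total system VA rating
pf    = watts / va    # ~0.99, matching the quoted power factor

btu_per_hr = watts * 3.412    # ~1155, close to the quoted 1154.31

nameplate_w = 2 * 800         # sum of PSU ratings - NOT the heat output

print(f"power factor       : {pf:.2f}")
print(f"heat load          : {btu_per_hr:.0f} BTU/hr")
print(f"nameplate / actual : {nameplate_w / watts:.1f}x")  # ~4.7, the OP's "factor of 5"

Size the cooling from the watts figure and the supply wiring from the VA figure, as above.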

Buy a watt meter or two for dual-supply systems (around £10) and check the power consumption of each system. Then build up the cooling budget (a worked sketch follows after this list):

- Allow for growth: extra disks and cards in systems, and extra systems and storage arrays - consider how much spare rack space you have and space for more racks.
- Add the power consumption of lighting and any other electrical equipment, unless it will normally be off and controlled by occupancy sensors.
- If the room contains electrical switchgear and distribution boards, allow for heat given off by those.
- If you expect people to work in there for extended periods, add 100 W per person for non-physical activity, or 200 W for people doing things like assembling/racking servers.
- If the room has windows, calculate the solar gain (this can be very considerable).
- Calculate heat leakage into the room from non-airconed adjacent areas (including any room below and the outdoors) at a 10C temperature differential.
- Computer rooms are usually made airtight for effectiveness of automatic fire extinguishing systems, but if yours isn't and it's large, you might want to allow for 2 air changes/hour (again at a 10C temperature differential), or more if people will be frequently opening the door.
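Here is that budget as a simple adding-up exercise (every figure below is a hypothetical placeholder - measure or calculate your own):

# Sketch: cooling-load budget along the lines above, in watts.

it_load     = 4000        # measured peak draw of servers/switches/storage
growth      = 0.5         # 50% headroom for extra kit over the room's life
lighting    = 150         # unless normally off on occupancy sensors
switchgear  = 100         # heat from distribution boards in the room
people      = 1 * 200     # one person racking servers at 200 W
solar_gain  = 0           # windowless cupboard in the OP's case
fabric_leak = 300         # conduction from adjacent areas at a 10C delta

total_w = (it_load * (1 + growth) + lighting + switchgear
           + people + solar_gain + fabric_leak)

print(f"design cooling load: {total_w / 1000:.1f} kW "
      f"({total_w * 3.412:.0f} BTU/hr)")

That total is what the n+1 aircon units then get sized against.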

Reply to
Andrew Gabriel

A power factor of about 0.9 is a pretty good worst-case estimate. I had to take a power and mass inventory of a small but extremely dense computer room a few years back (for floor loading and aircon). The main power distribution panel had a power/VA/PF meter built in. This was mostly HP and Viglen gear IIRC, total power in the few tens of kW.

Reply to
Tim Watts

Thanks to all

In answer to the questions raised:

It is a large 2.4m square cupboard without windows. There will be a number of devices including servers, switches, storage and related IT stuff.

Phil

Reply to
thescullster


I'd agree with all that. For the monitoring, have a look here:

formatting link

I installed a 4-channel SNMP temperature monitor, and there's a Linux box with an old Nokia mobile attached for SMS alerts. It's also worth noting that many UPSs have an SNMP card you can use as a sensor.
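A minimal sketch of that sort of polling, shelling out to net-snmp's snmpget (host, community and OID below are placeholders - the real OID comes from your sensor's MIB, and the alert hook is wherever your SMS script or shutdown logic lives):

# Sketch: poll an SNMP temperature sensor, alert above a threshold.
# Assumes net-snmp's snmpget is installed and the sensor returns the
# temperature as a plain number.

import subprocess
import time

HOST      = "192.168.0.50"            # placeholder sensor address
COMMUNITY = "public"
OID       = "1.3.6.1.4.1.99999.1.1"   # hypothetical temperature OID
LIMIT_C   = 30.0

def read_temp():
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID],
        text=True)
    return float(out.strip())

while True:
    temp = read_temp()
    if temp > LIMIT_C:
        print(f"ALERT: room at {temp:.1f} C")  # hook SMS alert / shutdown here
    time.sleep(60)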

Reply to
Chris Bartram

We had the reverse - an aircon unit blew, and it turned out we hadn't allowed enough slack. Luckily it was a cold dry day, and we had windows we could open. As the OP's room is small, you might be able to get away with "in case of emergency, open the door and station a guard".

Andy

Reply to
Vir Campestris
