DIY server: Good SATA HDDs

If you're really lucky you have one of those processors that runs triple-channel RAM.

But in any case there's no reason you can't stick a 2GB stick, or two more 1GB sticks, in there.

Andy

Reply to
Vir Campestris

Microserver = tiny little thing. Two RAM slots.

Reply to
Clive George

Servers run few processes, and file servers run even fewer.

90% of the processing power and 90% of the RAM on a desktop goes on supplying 40fps+ eye candy.

None of which is any use on a server.

In fact caching too much disk in RAM can be a disadvantage when the power goes down...

Only if you have 100+ users and gigabit networking do you need file server RAM.

My online server services a million-plus hits a day on 384MB of RAM - only when people do massive simultaneous SQL database searches does it run out.

Reply to
The Natural Philosopher

Or you could spend 1/4 of a million on one of these legacy server drives from a bank. Really big spinners:

formatting link

Reply to
Weatherlawyer

Not entirely true - if, say, your server runs Jira (Tomcat-based), or it has a special purpose like media conversion.

Reply to
Tim Watts

S'okay, you can rely on TNP to be wrong.

Reply to
Huge

"This Video is Unavailable :/"

Reply to
Johny B Good

Not so much 'wrong' (it's never quite as black and white as that), more a case of making a sweeping statement that's only true for the majority of cases yet ever so wrong for a range of specific exceptions.

Most of us, imho, are guilty of making 'sweeping statements' from time to time (I'm probably about to be accused, yet again, of making one here).

The problem, if you try to head off such an accusation by listing counter-examples and exceptions, is that you then run a real risk of being accused of 'spewing out a lengthy post'.

This is a dilemma for anyone posting to usenet on a subject in which they have extensive knowledge and/or experience. Getting a balance between a snappy generalisation and a sufficiently detailed treatise on the subject can sometimes be next to impossible.

TNP didn't play it safe by adding an 'IMO' qualifier. Most home users these days (imho) tend to run more than just basic file and print serving on their home server/NAS box, electing to maximise the return on the capital investment and electricity costs by running other handy services such as a torrent client and/or media streaming with on-the-fly conversion, which needs more CPU grunt[1] and RAM (ignoring the RAM requirement of a responsive ZFS setup alone).

Just like "The Big Boys", the home user can employ a modest UPS to eliminate the risk of data loss or corruption when relying on a very large ram cache. Most home users keen enough to run ZFS and other services with high ram (and CPU) demands will be aware of the need to use a cheap UPS. Those that don't will discover soon enough why they _should_ have added a UPS 'to the mix'.

I use a cheap APC BackUPS500 to guard against sudden loss of mains power to my NAS4Free box but, because it's so cheap (no monitoring interface), it's only effective if such an outage of power occurs whilst I'm around to notice the event and manually shut the server down.

It's not the best use of a UPS, but it's better than nothing. If it goes off in the middle of the night, the server will be left high and dry.

The only saving grace is that I'm not running a ram-hungry ZFS, and any torrent jobs won't suffer too badly as a result of such an interruption.

Even if I actually had a media service running, it's still unlikely to cause grief (and also unlikely to be doing any actual streaming at that time of the night anyway).

Although I'm not getting the best out of my UPS investment, a middle-of-the-night power outage is almost certain not to cause any data loss, just the inconvenience of fsck run time on some 11TB's worth of disk storage.

I can accept this limitation since any file operations which increase the risk of data loss and file corruption are only likely to occur when I'm around to manually shut down the server before the UPS falls over.

If I were to start using large amounts of ram to service the needs of ZFS (and any other high ram usage services), I'd upgrade the UPS to one that can be monitored by the server to cover the increased risk of data loss in such a set up. At the moment, I'm using UFS on four separate disk volumes in the box. The 4GB of ram is slight overkill for this setup, but I only had a pair of 2GB dimms spare to service a dual channel ram configuration.
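
If the UPS could be monitored, something like apcupsd would handle the unattended case. A minimal sketch of /etc/apcupsd/apcupsd.conf, assuming a USB-connected APC model (the thresholds here are purely illustrative):

  UPSCABLE usb
  UPSTYPE usb
  DEVICE
  # begin a clean shutdown once estimated runtime falls to 5 minutes
  MINUTES 5
  # ...or once battery charge falls below 20%
  BATTERYLEVEL 20

With that in place, apcupsd initiates the shutdown itself, whether or not anyone is awake to notice the outage.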

NAS4Free doesn't seem to want to take advantage of this as a ram cache since it's only availing itself of 18% of 3796MB right now, rising to a mere 31% peak during a move of some 3GB's worth of data (3 files), after which it dropped back to 19%.

It seems that even 2GB of ram is unlikely to be stretched by such activity. I can see myself re-purposing the pair of 1GB dimms I pinched from the refurbished Dell Dimension E521 to replace the pair I was using in the bench test rig (one of those dimms died suddenly and they weren't a 'matched pair' - as if that really mattered - but it looks 'nicer' now).

[1] Annoyingly, the single order of magnitude increase in throughput offered by Gbit Ethernet over Fast Ethernet seems to need a couple of orders of magnitude more CPU 'grunt'.

If you're running Gbit Ethernet, you're going to need a much more powerful CPU than you could cheerfully get away with on a 'simple fileserver' connected to a Fast Ethernet LAN, even if the RAM requirement remains relatively modest for such service.

Reply to
Johny B Good

Do they? I don't, nor do most people I know.

Torrent clients are VERY low on CPU and RAM by design. Only media conversion soaks up cycles and RAM, and most people need to interact with that, so they use desktops instead.

Would you believe that Gridwatch and 15 other sites on the same machine run on 384MB of RAM in total? You don't need cycles to do what you are not doing and never will do.

The difference is that I have been designing internet servers since the internet arrived. I know what they take and what they need.

Which means I CAN speak with authority. Because I have implemented most server types in one way or another.

What grinds CPU is media conversion. Applying the same algorithm to millions of frames.

What grinds RAM is multiple processes. Or manipulating huge graphic objects. And file caching if you let it.

What grinds disk IO is massive access to disks beyond the level of available caching.

It's very unlikely that any home server would be engaged in any of those sorts of activities.

Which is why my server here is an Atom-based fanless box with 512MB of RAM, and it simply doesn't even need that.

And it's able to fully saturate a 100Mbps link via SMB or NFS as a file server, and still have lots in hand to stream media to the TV or run some basic web services domestically.

It is more hampered by the 100Mbps link than anything else.

Reply to
The Natural Philosopher
8<

I can believe that; I had to upgrade to GigE to get maximum performance from my ARM-based NAS boxes.

You don't actually need much CPU or RAM for most things. I used to do digital video capture on a laptop with 512MB of RAM and a Celeron 600. I could get lossless capture and edit the stuff. It was a bit slow at converting to DVD though.

Reply to
dennis

Thankfully, it's relatively cheap to 'over-provision' cpu grunt these days and still enjoy the benefit of reduced power consumption with modern power management features.

In your opinion. Since additional features such as streaming multimedia and running a torrent client service have been part of the advertising blurb for most commercially made SoHo NAS boxes for some time now, I'd expect a significant (if still small) proportion of users to be using those extra features. Imo, such usage would be far from "very unlikely".

Whilst the percentage of our peers posting here who run their own homebuilt servers is likely much higher, we represent only a vanishingly small percentage of the whole market, so it's easy to get the impression that 'the whole world and their dog' are using NAS boxes and file servers to run services other than just basic file and print serving.

Yes, it's this last limitation that eases the cpu requirements considerably over those of a Gbit connected server. :-(

Reply to
Johny B Good

It's available to me, but has nothing to do with disk drives, although I didn't watch it all through...

Reply to
Andrew Gabriel

ZFS doesn't need a UPS, because it's never in a state where the image on disk is inconsistent, so you can safely pull the power plug at any instant. That's one of its key design features.
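
By way of illustration only (the pool name 'tank' is a placeholder): after an unclean power-off the pool normally just comes back at boot, and a scrub will re-read and verify every block against its checksum if you want belt and braces:

  # only needed if the pool doesn't re-import automatically at boot
  zpool import tank
  # re-read everything and verify checksums
  zpool scrub tank
  zpool status -v tank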

Reply to
Andrew Gabriel

I also updated my tests doc to include a case of raidz1 and raidz2 where the *whole* disk is given to ZFS.

formatting link

It's a little faster but there's not much in it. I'm changing my mind and I might use it after all - with MD RAID1 for the system itself.

I just need to practise "failing" a disc and resilvering.
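
Something like the following drill, I imagine - pool and device names (tank, da1-da4) are placeholders:

  # whole discs given to ZFS, as in the raidz1 test case
  zpool create tank raidz1 da1 da2 da3
  # 'fail' a disc by taking it offline
  zpool offline tank da2
  # resilver onto a spare
  zpool replace tank da2 da4
  # watch the resilver progress
  zpool status tank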

Cheers for the heads up

Tim

Reply to
Tim Watts

Forgive me if you've answered this before but have you monitored RAM usage in the ZFS setups?

Reply to
Mark

Not yet - but the documents I read say it will use all of the spare ram and concede it when applications want it - i.e. just like any other well-behaved cache.

Reply to
Tim Watts

I'm just keen to understand its minimum RAM requirements. It is noted for heavy memory use.

Reply to
Mark

You tell it how much memory to use. However, on Solaris it defaults to assuming it's running on a fileserver and uses most of the memory (otherwise you're wasting your fileserver memory). I don't know what the default memory usage is on Linux or FreeBSD, but it might well be the same.

If you are running other apps on the system, then you will want to tell ZFS to use less memory. On my consolidated home system (fileserver, desktop, etc), I tell it to use 1GB (of the 8GB memory).

ZFS will give up memory if the system becomes short of memory, but this takes some seconds, and you want to avoid this if you know the system is going to permanently need the memory for something else.

If you are running an application which does its own caching (and it's good at working out what to cache), then you will want to shrink the ZFS cache to probably no more than 1/4 of the cache size of the application (such as the SGA for an Oracle database), on the basis that the database is in a much better position to guess what's worth caching than any filesystem sitting underneath it can possibly be.
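
For illustration, the cap is a one-line tunable - the value is 1GB expressed in bytes, and the file locations vary by platform:

  # Solaris: /etc/system
  set zfs:zfs_arc_max = 1073741824
  # FreeBSD: /boot/loader.conf
  vfs.zfs.arc_max="1073741824"
  # ZFS on Linux: /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=1073741824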

Reply to
Andrew Gabriel
