DIY server: Good SATA HDDs

That's what nearline SATA disks are for.

This is a very real problem for SSDs (we've had customers with pairs of mirrored SSDs wear out within a couple of days of each other after 2 years' use). It's much less of an issue with hard drives, as long as you don't run them past their expected lifetimes (I always have to replace mine for more space before they get to 4 years old).

They have vastly different error handling.

A desktop drive is designed in the expectation that it holds the only copy of your data. When it gets a read error, it will try lots of things to get that data back, from simple rereading of the sector, through reseeking from both ends of the disk and head wobble, to trying to stitch the sector back together from several different reads. That can take a long time, and if it fails, your operating system may ask the drive to try several more times, repeating the whole process. It can be a minute or more before everything gives up trying to get your data back in the worst case.

OTOH, Enterprise drives (including nearline drives) are built with the expectation that they're running in a RAID environment, and the last thing you want is for a drive to spend tens of seconds trying to get back a sector which the RAID can read in 4ms from another disk. So an Enterprise drive will "fail fast", to allow the RAID array as a whole to continue functioning without a long pause.

If you simply compare these two behaviours in a standalone fashion, then the desktop drive will appear to have a higher reliability, and the Enterprise drive to return more sector errors. But Enterprise reliability is more than just that of an individual disk - it's the reliability *and performance* of the RAID array as a whole, and ensuring that one faulty disk doesn't impact the whole array. If you use Enterprise disks in a desktop, they will seem to be unreliable, and if you use desktop disks in a RAID, the RAID will behave badly. They're each designed for specific use cases.
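Incidentally, if you do end up with desktop drives in a software RAID, you can sometimes cap the recovery time yourself. A rough sketch, assuming the drive supports SCT ERC and you're on Linux md (device names are just examples):

# ask the drive to give up after 7 seconds on reads and writes
smartctl -l scterc,70,70 /dev/sda

# and/or give the kernel a longer command timeout so a deep-recovering
# desktop drive isn't kicked out of the array prematurely
echo 180 > /sys/block/sda/device/timeout

Not every consumer drive honours SCT ERC, and the setting is usually lost on a power cycle, so it needs reapplying at boot.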

Reply to
Andrew Gabriel

On Friday, 21 March 2014 15:03:24 UTC, D.M.Chapman wrote:
> In article , Mark wrote:
> > I guess all FS types have their drawbacks. Personally I consider ZFS to be the best option for me.
>
> Indeed. ZFS is great, but like you say, nothing is perfect :-) We love ZFS and have many many TB of it around... it's probably the biggest thing we are missing migrating from Solaris to RedHat :-(
>
> Darren

Don't forget to use ECC memory with ZFS. I have heard stories of entire drives being corrupted by faulty memory during a scrub.
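A scrub is easy enough to kick off and keep an eye on by hand, e.g. on a hypothetical pool called "tank":

zpool scrub tank
zpool status -v tank

The status output shows scrub progress and any checksum errors found along the way.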

Reply to
AlanC

I received the little HP Microserver (Gen 7) last Friday, so that's a +1 for ServersPlus.com.

Very impressed with the build quality - not too difficult to fit the RAM despite the compactness. They even thought to provide optical drive and HDD screws nicely screwed into a bracket inside the door rather than in a packet that will get lost - nice touch.

The offer is on until end of March. I have just filled out the cashback form.

4x 2TB WD Red drives on order, and a couple of fast USB sticks to play with USB booting so I can mess with ZFS at least for a bit.

Cheers

Tim

Reply to
Tim Watts

Though in this case you almost need to get the angle grinder out to remove the blanked out opening for that drive...

Dust getting inside the unit is a little problem. I'm using a cut bit of 'Universal Cooker Hood Grease Filter' fitted inside the door, doesn't seem to cut air flow much - but will periodically need cleaning itself.

This is currently a power-frugal(ish) Linux desktop for my 8-hours-a-day use. Well, not as frugal as a Raspberry Pi, but that would be going a bit far from my definition of 'desktop'.

Anyway, I found a cheap (£18) Sparkle GeForce 210 from CCL does very nice HD 1080P and plays nice with the cinnamon desktop in Linux Mint.

And with a single hard drive it pulls 40W of electricity at peak. The previous Windows 7 P4 desktop (c.2006) is in disgrace - 140W.

And this HP/Mint combo is scarily more responsive...

Reply to
Adrian C

In article , D.M.Chapman wrote:

I had to make a choice of FS for a 48TB array recently running on CentOS

ext2/3/4 were out (16TB max), so it came down to xfs, btrfs or zfs.

btrfs is still in beta, so I dismissed that. After a lot of research and considering that zfs is still in its infancy on Linux, needs a lot of memory to work well and wastes a lot of disk space, I plumped for xfs. Time will tell.
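For what it's worth, putting xfs on something that size is about as simple as it gets - roughly the following, with /dev/md0 standing in for whatever block device the array presents (mount point made up):

mkfs.xfs /dev/md0
mount -o inode64 /dev/md0 /srv/array

(inode64 is worth having on a filesystem that big so inodes aren't all crammed into the first 1TB; mkfs.xfs picks up stripe geometry automatically if it's md underneath.)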

Reply to
Mike Tomlinson

In article , Tim Watts wrote:

md-raid5 is just the way the physical volumes are configured, isn't it? You still need to lay a filesystem on top.

Reply to
Mike Tomlinson

In article , Tim Watts wrote:

Suggestion: use the SATA connection in the top (optical) bay on the Microserver for the system disc and keep the four drives solely for the RAID. This is how I've done it on my Microserver. Or use the internal USB port and a memory stick.

I've always felt that when you RAID drives it's best to keep it simple; use the entire disk, don't partition. More of a hunch than based on hard evidence, and also following a bad experience with an HP StorageWorks NAS that configured the disks in the way you suggest.
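With whole disks it's about as simple as md gets, e.g. (drive letters purely illustrative):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

and there's no partition table to keep in sync when a drive gets replaced.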

Reply to
Mike Tomlinson

In article , Andrew Gabriel wrote:

That mirrors my experience exactly. I run several hardware RAID chassis and normally use Hitachi enterprise-class drives. As an experiment I tried using Seagate desktop drives in one and the array regularly failed drives when they timed out following retries. After a reset, the array would come back up without issue with all drives present and there was no data loss.

The array controller software was sufficiently flexible to allow me to set the timeouts longer so as not to fail the drives immediately, but I still wish I'd fitted enterprise drives at the outset, and have done so since.

Reply to
Mike Tomlinson

Well, yes. Don't forget LVM for slicing it all up.

These days my preferred filesystems are XFS and EXT4.
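i.e. the stack ends up something like this (device and volume names purely illustrative):

pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 500G -n lv_media vg_data
mkfs.xfs /dev/vg_data/lv_media

with one LV per purpose, grown later as needed.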

Reply to
Tim Watts

The problem is that your system disk is then not RAIDed, which would be unacceptable for my use case.

I've never had a problem with running partitions striped across the disks as mentioned - as this is "real" Linux and I'm in control of the process, not at the mercy of some 3rd-party toolset which might decide to throw a hissy fit.

I am experimenting with ZFS right now - it looks "fun" but I think I'm going to stick with RAID5/LVM/??FS.

Some observations about ZFS so far:

1) It's cute and seems to have some cool features.

2) It does not support O_DIRECT or libaio (because the former does not make sense; not sure about the latter), so I am a little concerned that some software which insists on one or the other (e.g. an RDBMS) might not like it. Indeed, MySQL has raised this issue before, though I think it does actually run, maybe with a config adjustment?
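(If I do end up putting MySQL on it, the usual advice seems to be to match the dataset recordsize to InnoDB's page size and lean on ZFS for integrity - something along these lines, dataset name made up:

zfs set recordsize=16K tank/mysql

and in my.cnf:

innodb_doublewrite = 0
innodb_flush_method = fsync

- but I haven't tried that yet.)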

I'm trying to get a sensible "fio" benchmark jobfile defined (one that actually tests the filesystem without taking hours per run - if not, I'll go back to bonnie++) and then I'll benchmark a few combinations.
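Something along these lines is what I have in mind (paths, sizes and runtimes are just placeholders):

[global]
directory=/srv/nfs/test
size=4g
runtime=120
time_based
# ZFS doesn't do libaio or O_DIRECT, so stick to buffered synchronous I/O
ioengine=psync
direct=0

[seq-write]
rw=write
bs=1m
stonewall

[rand-readwrite]
rw=randrw
bs=4k
stonewall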

Cheers

Tim

Reply to
Tim Watts

Good points. Maybe I'll reconsider my choice of ZFS. I do like its extra data-integrity features, but my Microserver does not have much memory; I can't recall what the maximum it will take is, and it only has two slots.

Reply to
Mark

I should have some benchmark data (fio and bonnie++) from various ZFS levels vs MD/LVM/XFS in about 2 days. I'll point you at a link then.

Reply to
Tim Watts

Thanks (in advance) :-)

Reply to
Mark

I am currently waiting for Linux RAID to finish resyncing itself.

Disk benchmarks are a nightmare. I have an fio jobfile that shows massive differences between various systems (e.g. my work ESX cluster is *fast* and so's Linode, but my old home server is *slooow*) - however, the actual numbers don't really mean anything. bonnie++ numbers are a little more inspiring, but may be less useful in other ways.

I have measurements from my laptop (hdd and ssd), MiniITX server with 1 ssd, old server with good mobo (good 6 years ago) and 4 decent SATA disks in RAID5 as well as a very good VMWare ESX VM, and a linode VM.

When I have the rest of the tests from the HP Microserver for various disk layouts, I'll paste the essential numbers into a Google spreadsheet, tie it to my blog and post the link here.

Reply to
Tim Watts

I have 8GB in mine, which is the official maximum. ServersPlus used to offer a special version which had 16GB.

Reply to
djc

For home use, 4GB would be fine (at least in a Solaris-derived OS, such as Oracle Solaris, OpenSolaris, Illumos) - that's already way more than the working set size of home applications. I limit the ZFS cache to 1GB for just a single home user.
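On the Solaris side that cap is typically just a line in /etc/system (value in bytes), followed by a reboot, e.g.:

* cap the ZFS ARC at 1GB
set zfs:zfs_arc_max = 0x40000000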

I don't know about when running ZFS on Linux.

For commercial users, we typically run at 256GB, but that's with many hundreds of users each using the filesystem with much heavier filesystem loads than any home user. I doubt many of you need four 10Gbit ethernet links into your home fileservers ;-)

Reply to
Andrew Gabriel

My server never gives me any issues at 512MB. The desktop, however, is maxed out at 4GB, and one reason to think of upgrading is simply to get more RAM.

Reply to
The Natural Philosopher

As promised here are the (interim) bonnie++ results:

formatting link

The command run was:

bonnie++ -u root -f -q -d /srv/nfs/test/

where the filesystem under test was mounted on /srv/nfs/test/

It's a weak test - only one run per setup as I don't have all year. The latency figures are a bit suspect but the throughput and RandomSeeks/second look plausible.

Like most benchmarks, it's best not to try to understand what the numbers mean, but to look at the relative performances.

What is notable is that ZFS seems to outperform the equivalent MD/LVM2/XFS setup by a large margin (a factor of 2) - EXCEPT for RAID10-type setups, where MD-RAID wins.
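(For anyone wanting to reproduce the comparison, the two ZFS layouts boil down to something like this - device names purely illustrative:

zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

the first being the RAID5-ish layout, the second the RAID10-ish striped mirrors.)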

Hosts:

deb7test - work VMWare ESX VM running on 12 core Xeon with lots of RAM (unstressed) and a PS6500E iSCSI SAN in RAID10 mode. The VM had 1vCPU and 1GB RAM.

mothra - 1GB RAM Linode.com VM

gigan - 6 year old server, 4 SATA HDDs, AMD Athlon(tm) 64 X2 Dual Core Processor 5600+, 4GB RAM

shinybob - 1 year old miniITX Intel(R) Atom(TM) CPU N270 @ 1.60GHz, 2GB RAM, 1 SSD (Sandisk)

squidward - my Lenovo T410i Intel i3 laptop with 6GB RAM, one stock SATA HDD, plus one very fast SSD (Sandisk)

godzilla - HP Microserver N54L with 8GB RAM, 4x 2TB WD RED drives and 2 USB3 Sandisk flash sticks on USB2 ports.

Reply to
Tim Watts

Currently mine has a single stick of 1G so, obviously, I would have to replace it if I needed more than 2G total.

Reply to
Mark

--snip--

Thanks. I'll save that link for later :-)

Reply to
Mark
