Well, thanks to Andrew G and others who mentioned ZFS in the SATA disks thread.
I've been playing with it and - wow, it is impressive.
Installing on Debian is a breeze.
It seems perfectly happy on partitions, which is just as well, because it would be insane to run / on it - primarily because few rescue USB images would cope.
So I went to my old favourite:
/dev/sd[abcd]1 -> md-raid1 -> ext2 -> /boot
/dev/sd[abcd]2 -> md-raid1 -> ext4 -> /
/dev/sd[abcd]3 -> md-raid1 -> SWAP
and /dev/sd[abcd]4 for ZFS as a RAIDZ1 setup.
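For anyone wanting to copy this, creating the pool over the fourth partitions is roughly the below (the first by-id name is one of my actual disks, the other three are placeholders for yours):

zpool create tank1 raidz1 \
  /dev/disk/by-id/scsi-SATA_WDC_WD20EFRX-68_WD-WMC4M3224833-part4 \
  /dev/disk/by-id/<disk2-id>-part4 \
  /dev/disk/by-id/<disk3-id>-part4 \
  /dev/disk/by-id/<disk4-id>-part4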
After the manual step of making sure grub was installed on all 4 disks, I "destroyed"[1] disk 1 and re-added it, then "destroyed" disk 2, repaired it, pulled another disk and just re-added that too.
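The grub step is nothing clever, by the way - just the usual grub-install run against each member disk, something like:

for d in /dev/sd[abcd]; do grub-install "$d"; done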
[1] Boot from rescue, zero 1GB of the ZFS partition, then zero 1GB of the front of the disk.

I must admit, the re-adding was a little weird:
zpool offline tank1 scsi-SATA_WDC_WD20EFRX-68_WD-WMC4M3224833-part4
zpool online tank1 scsi-SATA_WDC_WD20EFRX-68_WD-WMC4M3224833-part4
zpool scrub tank1
I was expecting

zpool replace tank1 scsi-SATA_WDC_WD20EFRX-68_WD-WMC4M3224833-part4
but it did not like that.
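Either way, the scrub/resilver progress is easy enough to watch with:

zpool status -v tank1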
However, I had rsync'd a load of (expendable) stuff onto it beforehand, so a re-rsync showed me that the data had survived.
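For the paranoid, rsync's -c flag forces a full checksum comparison instead of trusting size/mtime (paths here are illustrative):

rsync -avc --dry-run /srv/stuff/ /tank1/stuff/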
Very very cool.
Nice that I can divvy up lots of "filesystems", add flexible quotas to each, and not end up with lots of bits of wasted space all over the place as with LVM.
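For example (dataset names and sizes made up):

zfs create tank1/photos
zfs create tank1/music
zfs set quota=200G tank1/photos
zfs set quota=100G tank1/music
zfs list -o name,quota,used,avail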