Oct 17 2006
Mini-ITX Terabyte Storage Array
Back before everyone left Zeus, Vivek and I had an ongoing bet to see if either of us could find the components on eBay to build a terabyte storage array for less than £1000.
At about the same time, I saw a review of the Buffalo TeraStation – which looked like a great product, but was priced at around £800 at the time. With the falling cost of storage, it’s now available for just over £500. I had decided that I wanted a terabyte storage array of my own, and I wanted one that ran Linux so that I could add additional services and install my own programs.
After taking ages to decide what equipment I wanted and to get around to actually buying it (as Paul will attest!), I eventually took the plunge and ordered all of the components I’d need.
(The main delay was waiting for VIA to release their C7 processors… that’s my story, and I’m sticking with it ;))
In the end, I bought:
- 1 x VIA 1.2GHz EN12000EG Mini-ITX motherboard
- 1 x 1Gb PC4200 DDR2 DIMM
- 5 x Samsung SP2504C SATA hard-disc drives
- 1 x Supermicro AOC-SAT2-MV8 SATA PCI-X controller card
- 1 x Supermicro CSE-M35T1 5-bay SATA Hot-swap drive rack
- 1 x 1Gb High-speed Compact Flash card
- 1 x 180W low-noise 1U PSU
The EN12000 motherboard is very cool (literally): it runs entirely passively cooled, unlike the faster 1.5GHz version, which requires a fan. The SATA controller card (which has no RAID functionality – it purely presents up to eight disks to the host OS) is Marvell-based and, despite the driver being marked as “HIGHLY EXPERIMENTAL”(!) as of Linux 2.6.17, it has worked perfectly for me – even though the motherboard has only a standard 32bit/33MHz PCI slot.
The Samsung drives are very high quality (I’ve always liked the 160Gb versions in my desktop), but were mainly chosen because, of all the drives available, the Samsungs have the lowest peak power usage. This is significant because, with five of them present, even a few watts extra could push a small, silent PSU over its limits.
Finally, the Compact Flash card plugs into a CF->IDE convertor – by far the least-hassle way to use one. Integrated CF readers tend to be either unbootable, or bootable only via USB emulation, which can cause complications once your OS of choice has started and is trying to work out where its root filesystem is…
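For anyone checking driver support before buying: the MV8 is handled by the kernel’s sata_mv driver, and you can confirm it’s enabled before building (a sketch only – the kernel source path is an assumption):

```
# Confirm the Marvell driver (sata_mv) is enabled in the kernel
# configuration -- the source path is an assumption.
grep SATA_MV /usr/src/linux/.config
# CONFIG_SATA_MV=y        (built in; "=m" would mean built as a module)
```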
Gary
12th March 2007 @ 10:21 am
I’m looking at doing something very similar with an old, old Mini-ITX MB. A few questions, if you would.
1) How well does the SAT2-MV8 work in the PCI slot? Any idea how bad the story is with every disk access going over the same 32bit/33MHz bus?
2) Do you have these in a software RAID5 array? Does it work well?
3) I’d prefer to set up a minimal Linux boot on the CF, then have the rest on the software RAID. Any advice?
Many thanks
Stuart
13th March 2007 @ 12:29 am
The MV8 is fantastic, actually. It oops’d the kernel in 2.6.18 (which worried me – I had an old machine with a Promise controller in it which stopped working properly at one kernel revision, and never worked from that point on) but it’s absolutely fine again in 2.6.19, and a quick test with 2.6.20 looks good too.
I honestly don’t know what the performance impact of using the single slot is – but in all honesty, there isn’t much of an alternative with this format of motherboard!
In any case, the maximum transfer rate over 32bit/33MHz PCI is, IIRC, 133MB/s – and assuming that, even out of cache, the maximum transfer rate from a drive is around 50MB/s (despite the 150MB/s limit of SATA), there is definitely headroom to spare… with real-world transfer rates from disc, I don’t think there’s a problem.
Certainly in terms of how it feels to use, the system is responsive and fast – but I’ve really only tested it under single-user load.
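If you do want numbers, the easiest sanity check is to compare single-drive and whole-array streaming reads (a sketch – hdparm being installed, and these device names, are assumptions):

```
# Rough sequential-read timings; device names are assumptions.
hdparm -t /dev/sda    # a single drive on the MV8
hdparm -t /dev/md0    # the assembled RAID5 array
# If the array figure flattens out well below (data discs x single-drive
# rate), the shared 32bit/33MHz PCI bus is the likely ceiling.
```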
I have software RAID5 using mdadm over raw partitions. Many people would probably recommend LVM as an alternative – but it is more complex to boot from and configure, and probably provides more flexibility than is strictly required. It does allow dynamic resizing of volumes (with a suitable filesystem – I strongly recommend XFS), though.
Note that if the RAID member partitions are set to type “fd” (Linux raid autodetect), then the kernel will automatically start the RAID volumes, so that the filesystems on them can be mounted at boot with no further configuration.
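For reference, setting this up amounts to a couple of commands (a sketch only – the device names are assumptions, and mdadm will happily destroy whatever is already on them):

```
# Mark the first partition on each drive as type "fd" (Linux raid
# autodetect) so the kernel assembles the array at boot.  Newer sfdisk
# spells this option --part-type.
for d in /dev/sd[a-e]; do
    sfdisk --change-id "$d" 1 fd
done

# Build the five-disc RAID5 array over those partitions...
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[a-e]1

# ...and watch the initial parity synchronisation.
cat /proc/mdstat
```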
Obviously, the problem with RAID5 is that writes are expensive: a single small write generates two reads (the old data and old parity blocks) and two writes in order to keep the parity stripe in sync – or, if the kernel reconstructs parity from the rest of the stripe instead, (N-2) reads and two writes for an N-disc array. For this five-disc array, that’s two or three reads plus two writes for every small write. The advantages are fast reads and data integrity – so you can still stream movies fast and be reasonably confident they’ll still be there tomorrow 🙂
On my system, I have a 1Gb compact-flash card which is split into two boot partitions and two root partitions – each pair mirroring the other. The clever part is that, by keeping the dumb-as-a-sack-of-hammers DOS MBR on the card, I can switch between the active and backup partition-pair simply by changing the “bootable” flag, thus minimising writes to the card. A custom init script, specified on the kernel command-line, mirrors the 512Mb root partition into the system’s 1Gb of RAM and then pivot_root(8)s to there. The compact-flash card is therefore never mounted read-write, except when updating the system software (which can be done on one pair only, with the boot-flags then updated – so that if anything has broken, recovery is simply a case of changing the flags back).
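The flag-flip itself is a one-liner (a sketch – the CF card appearing as /dev/hda through the CF->IDE convertor, and the partition numbering, are assumptions):

```
# Make partition 3 (say, the backup pair's boot partition) the active
# one and clear the bootable flag everywhere else; the plain DOS MBR
# boots whichever partition is flagged.  Older sfdisk spells this -A.
sfdisk --activate /dev/hda 3
```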
This has the advantage of working just as well whether the RAID array is connected or the board is being used standalone, without consuming the card’s write cycles. I specifically wanted the system to be usable independently of having the RAID array attached – if this weren’t required, more of the system could be moved onto the array to ease updates.
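As a rough illustration of the RAM-root part of the scheme (a sketch only – device names, sizes, and paths are assumptions, and error handling is omitted):

```
#!/bin/sh
# Minimal early-boot script along the lines described above, passed to
# the kernel as init=/linuxrc.  All names here are assumptions.

# The kernel has mounted the 512Mb CF root partition read-only as /.
# Create a tmpfs of the same size and mirror the root into it.
mount -t tmpfs -o size=512m tmpfs /mnt/ram
cp -a /bin /etc /lib /sbin /usr /var /dev /mnt/ram/
mkdir -p /mnt/ram/proc /mnt/ram/tmp /mnt/ram/mnt/cf

# Make the RAM copy the new root; the old CF root lands under /mnt/cf,
# still read-only, and is never written to in normal use.
cd /mnt/ram
pivot_root . mnt/cf
exec chroot . /sbin/init </dev/console >/dev/console 2>&1
```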
Overall, though, I’m very happy with the way everything pulls together.
I’ll write another blog entry shortly, detailing how the software works, with download links to my files and the diffs needed to get it all working…
Gary
18th March 2007 @ 1:09 pm
Thanks for that. A follow up question or two if you don’t mind.
1) You say you are using XFS. One of the issues I’ve come across is hard-disk spin-down with journalling filesystems – the disks never spin down, since the FS insists on hitting them regularly. Any solution, other than going for EXT2, which doesn’t?
2) I saw a /. thread about partitioning the disks (e.g. into 100GB chunks), RAID5ing the partitions across the disks, then combining the partitions to create one big logical disk again. The benefit was supposed to be the possibility of adding an extra disk without totally zeroing the array (though tortuous). Have you done this – any thoughts? Buying some 500GB disks now, then bolting more on in future when they’re really cheap, appeals (8x SATA capability = 3.5Tb…)
3) I’m also looking at making this box do other tasks (web server, text-to-speech, etc.). Any suggestions for a good distro which I can set up to do the NAS server role AND have a GUI that can be accessed over the internal network etc., but not have numpty desktop apps?
Stuart
18th March 2007 @ 1:52 pm
I’ll probably cover much of this in detail when I finally get around to writing-up the software side of things, but briefly:
1) I’m not sure what makes you think that journalled filesystems have to constantly write to the disk: If there are no writes, there is no need to update the journal, and so the disks can spin down. One good point worth making here is to disable inode access-time updating – because having to perform a write every time a file is accessed will keep discs spinning (although this happens regardless of filesystem chosen).
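Disabling access-time updates is a one-word change in /etc/fstab (the device and mount point here are assumptions):

```
# /etc/fstab entry for the array: "noatime" stops every read from
# triggering a metadata write, so idle discs can actually spin down.
/dev/md0   /store   xfs   noatime   0 0
```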
2) Recent Linux 2.6 kernels have a feature (which is, admittedly, marked as experimental) to non-destructively add disks to a RAID5 array. Note, however, that there are two approaches here. I chose the ‘mdadm’-style RAID setup, which is what this option applies to: it allows multiple block devices (partitions or whole disks – I used raw partitions, as above) to be presented as a single RAID block device, and it has the advantages of being easy to set up and understand, with relatively low overheads. The alternative is to use LVM to create an arbitrary number of Logical Volumes. This is a much higher-level system whereby the constituent devices are divided into extents, and these extents can then be assigned to volumes – allowing the Logical Volumes to grow and shrink (with filesystem support – and I think only XFS supports this) as required. Should you wish, it’s possible to run LVM on RAID, or RAID on LVM… although whether these are wise, or even make sense, is an open question. Personally, for a headless storage server where reliability is paramount, I decided that RAID/mdadm was probably the more robust approach, since I wouldn’t be needing the additional features offered by LVM.
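For what it’s worth, the mdadm side of such a non-destructive expansion looks like this (a sketch – device names are assumptions, and the reshape takes many hours):

```
# Add a sixth disc as a spare, then reshape the RAID5 array across it.
mdadm --add /dev/md0 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=6

# Once the reshape completes, grow the filesystem to match;
# xfs_growfs operates on a mounted XFS volume.
xfs_growfs /store
```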
3) I’ll cover distros in the software write-up, but I chose Gentoo. Since my system is based around a Mini-ITX board and C7 processor, the best way to get a decent level of performance out of it is to build the system from source (with “-Os” as a CFLAG: by optimising for size, there’s a greater chance of the generated code fitting within the CPU’s tiny L2 cache, which is probably the most beneficial optimisation there is). Really the only two choices for this are Linux From Scratch (which lacks a package manager, and is way too low-level for a storage appliance) and Gentoo. Using Gentoo, I was able to build a fully working system – including all development tools, compilers, and headers – which fits onto a 512Mb Compact Flash partition. The advantage is, of course, that only the software you specifically choose is installed. Having said that, due to size constraints, I’d strongly advise not installing X, and instead running interactive applications from a different machine. One option would be to install the core X libraries for applications to build against, but not enough of the infrastructure to display things locally. You can then run the necessary applications remotely from any other machine with a full X server (e.g. Mac OS X, Windows with cygwin, any other UNIX/Linux machine, etc.)
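By way of illustration, the relevant corner of a Gentoo /etc/make.conf might look like this (a sketch – “-march=c3-2” is merely the nearest match for the C7 in GCC of this era, and the USE flags are examples only):

```
# /etc/make.conf -- illustrative settings for a VIA C7 storage box.
CHOST="i686-pc-linux-gnu"
# Optimise for size so that generated code stands a chance of fitting
# in the C7's small L2 cache.
CFLAGS="-Os -march=c3-2 -fomit-frame-pointer -pipe"
CXXFLAGS="${CFLAGS}"
# Keep the desktop stack out of the dependency graph.
USE="-X -gtk -qt -kde -gnome"
```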
I would definitely say that the binary distributions are not the best choice here, though – too much gets pulled in in the name of compatibility, and code runs slower when it isn’t optimised for the low-powered VIA processors.
Gary
18th March 2007 @ 9:05 pm
Re: Disks not spinning down on journalling FS – I don’t know, but that is what I read via a Google search. There are even references to hacks for laptop setups to allow the disks to spin down. Apparently there is some regular cache-coherency action that spins the disks up.
Re: Distros – I feared you’d say Gentoo. Ho hum, it will be a learning experience…