Feb 4 2012
SSD RAID considered pointless
The Tech Report, one of the best hardware review sites I know of, has just completed a test of a number of different SSDs of differing capacities. One of the most surprising results?
A RAID0 setup with matched SSDs has performance on a par with, or actually lower than, a single mechanical hard-drive.
The results are available here, but one of the key graphs is:
… where the RAID configurations are all at the bottom of the chart.
This goes to prove yet again how much SSD performance is dependent on TRIM support – something that (at least) Intel driver-based RAID setups don’t currently offer.
It’s worth noting that if running Linux with ext4 on LVM (not md RAID) then both support ‘discard’ to issue TRIM commands even if the device is part of an array.
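As a sketch of what that looks like in practice – device names here are hypothetical, adjust for your own volume group – discard can be enabled either per-mount or run on demand with `fstrim`:

```shell
# /etc/fstab entry: mount an ext4 LV with online discard,
# so TRIM is issued as blocks are freed (device name is an example)
# /dev/mapper/ssdvg-root  /  ext4  defaults,discard  0 1

# Alternatively, leave 'discard' off and trim on demand or from cron;
# -v reports how many bytes were trimmed
fstrim -v /
```

Batched `fstrim` is often preferred over the `discard` mount option, since per-delete TRIM can add latency on some drives.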
Mysticus
18th March 2012 @ 11:40 pm
I do not think the problem is particularly TRIM-related. It seems (my personal opinion) that the bottleneck is the I/O capacity of the RAID controllers and the driver implementation. My theory is more about I/O: in general, I/O queues max out at around 100-150 for regular HDDs, if I remember correctly, and ~250 for SAS/SCSI server HDDs. The fundamentals of mechanical HDDs are totally different from SSDs, starting with access delay (latency) and I/O throughput – compare handling 150-250 commands with a regular HDD to 4000-5000 in entry-level SSDs and 40000-50000 in better SSDs. Look at the PCIe SSDs with on-board controllers designed for SSD use: they can use 2-4 SSD PCBs and the speed difference compared to the chart above is immense. See for example the Fusion-io models at the high end or the OCZ RevoDrives at mid budget. These things use regular SSDs in PCB form (no cases, just plug-in or on-board PCBs) and they perform at nearly RAM speeds – ranging from 500-700MB/s to 1500-2000MB/s, with unimaginable I/O rates! They can literally move a DVD’s worth of data in 2-3 seconds. So again, the problem with SSD RAID lies in the implementation/design of the RAID hardware itself. Not the SSDs.
Stuart
19th March 2012 @ 9:16 am
Bear in mind that the solid bars above represent results from when the SSDs/arrays were in a pristine state, whereas the hatched bars represent results from when the storage had been overwritten entirely. This is a worst-case scenario without TRIM – the controller has to expend a significant amount of effort re-writing and shuffling around junk blocks – conceptually equivalent to defragmenting a 100% full hard-drive.
Therefore I agree that there will hopefully be improvements to come in terms of controllers and their bandwidths – but in this case it’s the deltas as well as the absolute performance that matter, and all of the SSD RAID configurations were within a hair’s breadth of – or below – hard-disc levels of performance. Given the dramatically higher cost and lower capacity of such a solution, it really shouldn’t be recommended.
It would be fascinating to see how an ext4/LVM2 SSD RAID performs to see if this issue can be overcome…
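For anyone wanting to try it, a minimal sketch of that setup – striping across two SSDs with LVM rather than md RAID, so discard can pass through – might look like this (device names and sizes are examples only):

```shell
# Turn two SSDs into physical volumes and group them (example devices)
pvcreate /dev/sda /dev/sdb
vgcreate ssdvg /dev/sda /dev/sdb

# -i 2: stripe across both PVs (RAID0-style striping in LVM)
# -I 64: 64KB stripe size
lvcreate -i 2 -I 64 -L 100G -n ssdlv ssdvg

# ext4 on top, mounted with discard so TRIM reaches the drives
mkfs.ext4 /dev/ssdvg/ssdlv
mount -o discard /dev/ssdvg/ssdlv /mnt/ssd
```

Whether the discard requests actually reach the underlying drives can be checked with `lsblk --discard`, which shows non-zero discard granularity for devices that support it.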