Discussion in 'Article Discussion' started by Sifter3000, 10 May 2010.
Differing manufacturers? What?
Are you on crack? All of these data drives are WD Caviar Green 2.0TB drives! Even the 320GB drive is a WD Scorpio Blue. Only the SSD is from OCZ.
The point is that if all the drives came from a bad batch, he's bu**ered. Chances are slim, but after all that time and effort...
I think what Gareth means is that by using drives from different manufacturers, you again minimise the chance of total array failure. Say that batch of WD 2TB Greens had a bug - wouldn't you be happy that half of your array was on entirely unrelated 2TB Seagates, or vice versa?
Well, the chances of them failing at EXACTLY the same time seem far-fetched. If one of the drives in my server failed, I'd just replace it and rebuild the RAID.
I cannot even begin to comprehend how you read that sentence. That was entirely my point - if there's a bad firmware or manufacturing error that affects the entire batch, the NAS is ruined. Splitting the drives across manufacturers would help prevent such an issue - it's SOP for RAID.
What about cooling? There doesn't appear to be a lot of space between the drives for air to flow.
Although I agree it's good practice to select drives from multiple vendors / batches, RAID-5 can only withstand the loss of one drive before data loss occurs - so even with 50% of the drives from another manufacturer, a second failure still loses all the data. Ideally the 8 drives would be from 8 vendors - now that would be a thing! And seeing as RAID isn't backup, he really needs to build a second one... Great project though.
Drive manufacturer argument aside, this is the kind of minimalist design that I LOVE! It's sad that no mainstream manufacturer can get it right. But bravo Will, very sleek, very clean, something I would be proud to stick next to a HTPC in my entertainment center.
I thought it was a bad idea to mount HDDs at any angle other than 0° or 90°.
I'm impressed... very nice build to that.
Out of interest, what's it going to be running? Will it be a media PC (Windows?) or a file server (Linux?)
I read "or are you aghast that he hasn't spread his risk by using drives from differing manufacturers - or at least batches - for his array?" as "or are you aghast that he has increased the chances of drive failure by using drives from differing manufacturers - or at least batches - for his array?"
I was quite upset with a school related matter (My hatred of lessons may or may not have any relation with that), and quite a few people on YouTube happened to be commenting "Why is he using different drives?" so my frazzled mind was confused... Apologies for that!
Hmm... I think RAID 5 might be pointless for this, though. I remember reading somewhere that the chance of an unrecoverable error on HDDs is about once in every 12TB read - but if one drive fails and you hit an error while rebuilding the array....
Ah - that'd do it.
Heh! Nay worries - was concerned that I'd been writing even more gibberish than usual, if such a thing is possible.
The reason you use drives from different manufacturers and different batches is so that you really reduce your risk of simultaneous failures. While this isn't traditionally an issue, it was REALLY highlighted by the Seagate BSY issue. We had a client who spent quite a bit of money when he built a system to make sure that drive failures wouldn't **** him over, and wham, lookit that.
Well, the sample URE rate (see the detailed drive specs) in that semi-famous ZDNet article was given as 1 in 10^14 bits, meaning that for every ~12TB read, statistically you would expect at least one Unrecoverable Read Error. The theory being that if you lose a disk and try to rebuild the array, you're statistically likely to hit an error during the rebuild.
For URE rates of 1 in 10^15 (quite possible with Enterprise-grade disks) the chances are significantly lessened - well, as long as the manufacturers' figures are correct... RAID-6 doesn't make it much better.
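For the curious, here's a rough sketch of the maths behind those URE figures. It treats each bit read as failing independently at the quoted spec rate, which is a simplification (real errors cluster, and the quoted rates are worst-case specs), so take the numbers as illustrative only:

```python
import math

def p_ure_during_rebuild(data_tb, ure_rate_bits):
    """Probability of hitting at least one unrecoverable read error
    (URE) while reading data_tb terabytes, assuming one URE per
    ure_rate_bits bits read and independent bit errors."""
    bits_read = data_tb * 1e12 * 8                 # TB -> bits
    # P(no error) for n independent trials at rate 1/r is ~ exp(-n/r)
    return 1.0 - math.exp(-bits_read / ure_rate_bits)

# Rebuilding 12TB of data with consumer (10^14) vs enterprise (10^15) disks
print(p_ure_during_rebuild(12, 1e14))   # ~0.62 -- likely, but not certain
print(p_ure_during_rebuild(12, 1e15))   # ~0.09
```

So at 1 in 10^14 a 12TB rebuild has roughly a 60% chance of tripping an error - bad odds, but not the guaranteed failure it's sometimes made out to be.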
My personal preference would be to keep two smaller arrays and concatenate them (not stripe them, which would be RAID 50), so that a single disk failure has a better chance of rebuilding, two disk failures split across the arrays are survivable, and if two fail in one array (very unlucky) then at least you haven't lost all of your data - just the end of the file system, which is still more recoverable than striped data.
That looks pretty awesome
How did he manage 266MB/sec read on GigE though? Surely it's limited to 125MB/sec?
I was thinking the same thing, at least have a quarter of an inch for some airflow. That would worry me.
I would assume that's the raw drive read speed rather than network throughput.
But that's rubbish for 12 drives! Three Samsung F1s in RAID0 would be faster!
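For reference, the GigE ceiling being cited above comes from simple arithmetic - a quick sketch (real-world throughput is lower still once Ethernet/TCP/IP framing overhead is counted):

```python
# Gigabit Ethernet line rate: 1,000,000,000 bits per second on the wire
line_rate_bits = 1_000_000_000
max_mb_per_sec = line_rate_bits / 8 / 1_000_000   # bits -> bytes -> megabytes
print(max_mb_per_sec)  # 125.0 -- so 266MB/sec must be a local benchmark figure
```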