Hi, I need to configure my drives - what are your thoughts, peeps? I've just picked up 6 * 1TB Samsung Spinpoints for my new Asus P6T Deluxe i7/285 rig. It'll all live non-overclocked in an old-school 1st gen CoolerMaster ATC: 2 drives in the standard 4-drive cage, and 4 in a hot-swap caddy in the 5.25" bays. The main gear is in place, but I've only just ordered 4 of the drives + cage. When the new gear arrives I'll set everything up in a config that will last a fair while (last week has just been a burn-in - I'm loving the i7, and the BFG 285 OC is great).

Currently running Windows 7 64 & Vista Ultimate 64 dual boot, each with a 1TB drive to play with. Windows 7 will probably come off and go onto a virtual machine, as it doesn't offer anything for me just yet.

My normal backup regime has been to run a full drive image every once in a while, and I have a half dozen USB drives for this purpose (4*240, 1*400, 1*500). I also keep at least two copies of all "key" data (pretty much just photos + music library); the total size of my key data is about 350GB. Rather than use external drives for primary backup I'll be using internal drives for speed and ease of use. Key files will be periodically copied to external drives for a copy that isn't inside the main rig.

I'm thinking of putting Vista on RAID 0 - as long as I've got a separate full backup of the machine I can live with the reduced MTBF. But is the speed benefit really worth it over a single Spinpoint? I may configure 2 or 3 RAID 0 arrays on a pair of the drives; this way I can be sure the OS image will still fit on a single drive if need be.

I could configure the other 4 drives as a 4-way RAID 5, but I'm really nervous of using the RAID on the MB like this. Coping with a drive failure is one thing, but what's the point of having a system that makes an MB failure a frigging nightmare? I use RAID 5 arrays all the time on enterprise hardware, but then I've got people to handle migration on hardware failures if needed. Anyone migrated a Matrix RAID 5 array from one MB to another? Across makes?

What about RAID 10? Anyone really using it? And again, what about migration should the MB fail? Gives me the heebie-jeebies.

So current thinking is RAID 0 using 2 drives for OS + installs + primary location for everything. The other 4 drives won't be in RAID, just used to keep backups on etc. Should anything fail, I'll then have drives that can be moved easily between rigs.

Thoughts? Good kung-fu?

Jen
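Edit: for anyone wondering about the RAID 0 risk maths I mentioned, here's the back-of-envelope I'm working from - a minimal Python sketch, and the 3% annual failure rate is a made-up illustrative number, not a Spinpoint spec:

```python
# Back-of-envelope: chance a RAID0 array survives a year vs. a single drive.
# Assumes independent failures and a hypothetical 3% annual failure rate.

annual_failure_rate = 0.03  # illustrative only, not a real drive spec

def survival(drives: int) -> float:
    """A RAID0 stripe survives only if every member drive survives."""
    return (1 - annual_failure_rate) ** drives

for n in (1, 2, 3):
    print(f"{n}-drive RAID0: {survival(n):.1%} chance of surviving the year")

# 1 drive: 97.0%, 2 drives: 94.1%, 3 drives: 91.3% -- any single
# failure kills the whole stripe, so the risk grows with member count.
```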
Wow, you are really paranoid here. There's a good article on RAID pros and cons on bit-tech - just search for it with the site search (not the forum search). The general rule is that RAID0 will improve performance a lot in benchmarks, but you'll see hardly any benefit from it in real life. It's up to you to decide if you want it because you can or because you will actually benefit from it, because some people actually do. One word of warning though: RAID5 on a motherboard's controller is a big no-no. RAID1 and RAID0 work very well, but the parity calculations needed for RAID5 and 6 are pure hell and need dedicated hardware, i.e. a RAID controller. My advice would be to just try RAID0 on two or three of the drives to have a laugh, see how much benefit you get, and then decide if it's a keeper or not. Security-wise, I think it might be a good idea for you to use RAID 10, since you are going to use the other drives as backup anyway.
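To show what I mean by "parity calculations" - here's a toy Python sketch of the XOR maths a RAID5 stripe write has to do (heavily simplified; real arrays do this across every write, which is why it eats CPU on a motherboard controller):

```python
# Toy illustration of RAID5 parity: the parity block is the XOR of
# the data blocks in a stripe, and it must be recomputed on writes.

def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks together to produce the parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
p = parity(data)

# Lose any one block and XOR-ing the survivors with the parity
# rebuilds it -- that's the whole RAID5 trick.
assert parity([data[1], data[2], p]) == data[0]
```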
Personally, I wouldn't bother with RAID0. I'd just grab a half-decent hardware RAID card and stick the whole lot in RAID5. Mega speed, plenty of storage space, and security.
I'm with Krikkit on this one... A RAID controller card is the way to go, set up in RAID 5 (or RAID 6 if you're REALLY paranoid). Along with better performance (it takes the strain off the CPU and puts it onto the controller card, so you use fewer CPU cycles), you'll also not have to worry about your motherboard going bad and losing all of your data. The majority of the time, if your motherboard fails with a RAID setup, your data goes with it. Even if you get the exact same motherboard again, the chances of the hard drives working again without reformatting are very slim. If you have a dedicated RAID card, should your motherboard fail, you just take the card out and put it into the new motherboard. Et voilà!
I concur with using RAID5 (with a dedicated controller). Whatever you do, make sure that at no point will you need to install XP 32-bit or lower if your (virtual) disk size is larger than 2TB. You need an OS supporting GPT for disk sizes beyond 2TB; otherwise you run the risk of corrupting the data on the drives. Only Windows Server 2003 (and possibly XP x64) and higher, as well as Linux, support this feature.
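For the curious, the 2TB line isn't arbitrary - it falls straight out of the MBR partition format's 32-bit sector addressing. A quick Python check of the arithmetic:

```python
# Why MBR tops out around 2TB: it stores sector addresses/counts in
# 32-bit fields, and the classic sector size is 512 bytes.

sector_size = 512        # bytes per sector
max_sectors = 2 ** 32    # largest value a 32-bit LBA field can hold

limit = sector_size * max_sectors
print(limit / 2**40, "TiB")   # -> 2.0 TiB

# GPT uses 64-bit LBAs instead, which is why an OS with GPT support
# is needed for anything bigger than this.
```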
I must say I partly agree. Data security is best realised in a RAID5 setup, but you really will need a dedicated controller for that. My advice was based on the fact that RAID controllers are expensive, and you may not even benefit that much. If you really have more than enough space, you could do RAID 10 instead and save yourself the purchase of a RAID controller. So basically, +1 on the RAID5 advice. RAID10 will do the trick without buying a controller, but you'll have less usable space and no expandability.
Adding a controller isn't on the list. I've got a decent Adaptec 4-port SATA one floating around somewhere, but I just can't see any real-world benefit for my needs. May give RAID 0 a go and see if it makes any difference for me; I might run some real-world timings before I install the new drives so I can do a before-and-after. Thanks guys.
Because then the parity calculations involved have to be done by your system's CPU, instead of a separate dedicated one that would do it in a hardware RAID card. Which basically means your entire system slows to a halt while the CPU figures out what to write to the disks - and that takes forever. Dedicated hardware RAID cards (good ones, anyway) offload that process. RAID 5/6 write performance will still suck (as the parity calculations still take forever), but you can actually use your computer while you're writing data. - Diosjenin -
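Edit: to make the write penalty concrete - here's a rough Python sketch of the read-modify-write dance a RAID5 small write goes through (simplified; block sizes and layout are hand-waved):

```python
# The RAID5 "small write" penalty: updating one data block also means
# patching the parity block, which costs extra reads and writes.

def new_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Parity update rule: P_new = P_old XOR D_old XOR D_new."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# One logical write therefore becomes four disk I/Os:
#   1. read the old data block     2. read the old parity block
#   3. write the new data block    4. write the new parity block
# ...plus the XOR work in between, which is exactly what a dedicated
# card's own processor takes off your CPU's hands.
```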
Thing is, what if you have a file server that just sits there and torrents and serves files? Who cares if the CPU works a bit to calculate what goes where? Is that a viable solution? Really only curious for my own reference, because I am considering something similar to what the OP wants to do.
I am also curious to know the answer to this. I'm using a 2.2GHz Athlon in my NAS right now; that CPU power needs something to do.
Well, this really is not a bad thing. However, you lose the speed advantage that you were originally getting from RAID 5, and losing speed is not something that a lot of people here at Bit-Tech are prepared to accept. If you use RAID5 on a dedicated machine (basically a NAS), then RAID 5 ends up being the most space-efficient way of achieving data redundancy. If that is what you are looking for, you are on the money, but beware that the whole array might actually be slower than a single drive. Parity calculations are THAT crippling. This is also why good RAID cards are so expensive: they are basically a mini system-on-a-card with a processor, RAM (sometimes up to 2GB) and sometimes even their own network interface.
So a RAID6 on a dedicated card would be faster than a software RAID6, even if the box that does the software RAID6 is basically a NAS that does nothing else?
Dude, you have no idea how much faster! Those cards are so expensive for a reason; they really do add value. RAID6 is just a paranoid version of RAID5 though. The only benefit it has over RAID5 is that you can lose not one, but TWO disks and still be OK. Oh, and when RAID5 is rebuilding after a disk failure, you cannot keep using the data; RAID6 can. That's just a few hours though.
For one, that would "waste" a lot more drives. With six drives in that arrangement you would get the capacity of two, where with RAID 6 you would get the capacity of four.
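The capacity trade-offs come straight out of the layout maths - a rough Python sketch for the six 1TB drives in this thread (simplified, ignoring formatting overhead):

```python
# Usable capacity of six 1TB drives under the RAID levels discussed.

drives, size_tb = 6, 1

layouts = {
    "RAID0  (pure stripe)":     drives * size_tb,        # 6 TB, zero redundancy
    "RAID10 (striped mirrors)": (drives // 2) * size_tb, # 3 TB, mirroring halves it
    "RAID5  (1 parity drive)":  (drives - 1) * size_tb,  # 5 TB
    "RAID6  (2 parity drives)": (drives - 2) * size_tb,  # 4 TB
}

for name, tb in layouts.items():
    print(f"{name}: {tb} TB usable")
```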
Not on a dedicated system... Software RAID is the way to go on a NAS, because the CPU has all the grunt needed. I did a test on a dedicated FS (P3 666) I have (12*1TB drives), and I got >450MB/s speeds. So not THAT crippling... And even when rebuilding the array, you can use the data on both RAID 5 and RAID 6 arrays... It's just more 'risky' on RAID 5, because IF you lose a second disk while rebuilding, you lose the lot... But what about the speed, you ask? Well, the maximum you'll share is about 12MB/s (100Mbit LAN), so it's about 5 times slower than any regular HD anyway...
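The bottleneck arithmetic, for reference - a quick Python sketch (raw line rates, ignoring protocol overhead):

```python
# LAN throughput ceilings: the wire caps what a NAS can serve,
# regardless of how fast the local array is.

def link_mb_per_s(megabits_per_s: float) -> float:
    """Raw link rate converted to megabytes per second (8 bits per byte)."""
    return megabits_per_s / 8

print(link_mb_per_s(100))    # 100Mbit LAN -> 12.5 MB/s
print(link_mb_per_s(1000))   # Gigabit LAN -> 125.0 MB/s

# Even a 450MB/s array can only hand out ~12MB/s over 100Mbit, so on a
# pure file server the parity-speed worry matters a lot less.
```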