I have an Asus P6T with 4x 1.5TB drives in RAID 5. When I made this change from discrete drives I reinstalled my OS (theoretically it can be done without a clean reinstall but it wasn't worth the effort for me). The RAID 5 performance of the ICH10R Southbridge is excellent. Obviously it isn't as good as a hardware-RAID card but it is a lot better than most chipset RAIDs.
If the motherboard was set to IDE or plain SATA mode, Windows will most likely not boot if it's then switched to AHCI or RAID (AHCI to RAID mode works, I think). There is a way to do it without reloading, but I haven't checked how. If cost is not an issue, an SSD is a better option; a 256GB M225 will outpace most RAID setups, though capacity is a limitation. Once you get past three HDDs, more HDDs give you a higher data rate than one SSD, but no HDD can touch an SSD's access times. I normally never recommend SSD RAID, as it's mostly pointless given the access times, IOPS, and data rate of a single SSD. RAID with two SSDs also means no TRIM at this time, so it can end up slower than one or two SSDs on their own.
I agree RAID 0 is worth it if you have a dedicated backup (always worth it anyway). When people say you'll lose all data if one drive fails, they mean that if the chance of a single drive failing is x, the chance of a two-disk RAID 0 array failing is 1 - (1 - x)², roughly 2x for small x, since losing either drive loses the array. It's still small, and often vastly over-emphasised, but it is about double the risk of a single drive.
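The combined-risk point above can be sketched in a few lines. The 3% yearly failure rate below is a made-up illustrative number, not a figure from any drive vendor:

```python
def raid0_failure_prob(x, n):
    """Probability that an n-drive RAID 0 array loses data,
    i.e. that at least one member drive fails (independent failures assumed)."""
    return 1 - (1 - x) ** n

x = 0.03  # hypothetical per-drive yearly failure chance
print(raid0_failure_prob(x, 1))  # 0.03 for a single drive
print(raid0_failure_prob(x, 2))  # ~0.0591 for two drives striped: nearly double
```

The independence assumption is generous; drives from the same batch in the same case often share failure causes, so the real risk can be a bit higher.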
To be honest, RAID 0 for HDDs isn't really worth it. First, failure rate: though small, it does add up with each drive; RAIDing SSDs, which are far more reliable, would be fine. Second, noise: HDDs aren't loud, but they produce vibrations, and four of them might rock your case, giving a low, annoying hum all the time. Third, price: getting multiple independent disks is more expensive than getting one really big one. In my setup I have a 128GB SSD for Windows, programs, and the games I play frequently, plus two HDDs in RAID giving me storage for movies, photos, and programs/games I don't play often or would only play once. Then I have a cheap but very large single hard drive to back up my RAID.
A blue screen of death is giving me the feeling that one of the HDDs in my RAID array is not working properly. Is there a way to pinpoint which one it is?
OMG, that is hilarious. To be honest, if you are using anything less than enterprise-level drives for RAID 0, you might as well run the drives next to a magnet. Western Digital RAID Edition drives have the lowest failure rate.
onthejazz, you need to install Intel Matrix Storage Manager. It's a utility that keeps track of your RAID and will tell you which drive threw an error (you can then accept the error and it will rebuild the array if you're using RAID 5). Here's a screen capture of mine today:

In this case my port 0 HDD encountered a read error. Being a consumer drive, it went to great lengths to try to recover from the error (including resetting itself), which caused the RAID controller to think the drive was failing. This happens about once a week for me, as my PC uploads about 0.5TB daily onto the WAN, so my drives get a good going over. Everything still functions, but I take a massive performance hit while the rebuild is in progress. Because of all this I'm in the process of migrating my array (once I remove the Matrix RAID, as only Intel chipsets support it) to a hardware RAID controller, which should be able to avoid this issue.

A common misconception is that consumer drives fail more often than enterprise drives. This is not the case. It is simply that enterprise drives give up on read errors quickly and report them, as they expect to be connected to a RAID controller, which will solve them. For an excellent read on HDD failure, everyone should look at a recently released study by Google, which worked from a sample of over 100,000 drives. The PDF can be found here: http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en//papers/disk_failures.pdf
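The consumer-vs-enterprise behaviour described above mostly comes down to the error-recovery timeout (often called TLER or ERC). On drives that expose it, smartmontools can show and shorten the SCT Error Recovery Control timeout so the drive reports a bad sector quickly instead of retrying for minutes. A sketch only: `/dev/sda` is a placeholder, many consumer drives don't accept this command, and on many that do the setting is volatile and resets at power cycle.

```shell
# Show the current SCT Error Recovery Control timeouts (read/write)
smartctl -l scterc /dev/sda

# Set both timeouts to 7.0 seconds (values are in tenths of a second),
# so the drive gives up fast and lets the RAID controller handle the error
smartctl -l scterc,70,70 /dev/sda
```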
I use RAID 5 and it's fine, apart from when I need to do a hard restart, as the software has to check the drives, which means 100% usage for four hours.
Wut?? This is not normal, even with heavy use! I use consumer-grade drives in both my servers, and I back up both my rig and my wife's to the server (RAID 5) every night (0.6TB), and then that is mirrored to the other server each night (also RAID 5). I've never had to do this (apart from one proper drive failure). So my RAID 5 arrays are reading AND writing as much data as yours each night, and it's flawless. Seriously... even with consumer-grade drives, it should be a rare occurrence. Something is very wrong there. Either your drives are suspect, or chipset RAID is worse than I thought it was.
Pookeyhead, according to your signature you appear to be using proper hardware RAID cards? I realise that I have an unusually high frequency of rebuilds. I believe this is either the firmware (chipset) based RAID not handling the parity information properly when a drive encounters an error (the RAID 0 array that is also on those drives never has problems), or the chipset not properly dealing with two arrays on the same set of drives. Either way, my newly acquired hardware RAID card should solve these issues, as well as doubling my available SATA ports to 16 =D so now I can see how many HDDs I can fit in a mid-tower case...
The latter, I reckon. I do use a hardware RAID card, yes, but there is no provision to set it to wait out drive errors, and the documentation warns of the problems with non-server-grade drives, so I can only assume it's vulnerable to the same drive error-recovery wait issue as yours. I still get no problems. Wait and see what happens when you install the RAID card.