
SCSI RAID

Discussion in 'Hardware' started by djDEATH, 26 Sep 2008.

  1. djDEATH

    djDEATH Habari gani?

    Joined:
    23 Mar 2006
    Posts:
    434
    Likes Received:
    5
    I have come into possession of three 146GB HP SCSI disks. Wide Ultra320, to be exact. I don't know a hell of a lot about these, but I'm guessing that 10,000 RPM means some serious performance. And I have three of them. That, to me, spells RAID 5.

    So, my question is: what do I need to get this going? I'm assuming some sort of PCI/PCI-e controller card (this is for my desktop, not a server), but which one, and how much do I need to spend to get it all going? I have seen some controller cards for £50-60 and some for £500, and in all fairness I don't understand what's on offer at each price point.

    Lastly, am I going to see a performance increase from having a RAID 5 array with these drives, and is it worth it?
     
  2. murtoz

    murtoz Busy procrastinating

    Joined:
    9 Apr 2008
    Posts:
    212
    Likes Received:
    8
    So the price difference works as follows:
    - cheap cards: host-based RAID. The card is just a SCSI controller, with no processor/memory of its own for the RAID 5 parity calculations, so it uses host CPU time and memory.
    - expensive cards: full-blown h/w RAID, with a processor and (sometimes upgradeable) memory. Typically PCIe x8 or PCI-X, and you'll have trouble getting this into a desktop (I've heard of cases where these cards don't work in desktop PCIe x16 slots).

    As for performance - this can vary widely. Depending on the RAID card and the settings you could get good speeds, or it could be slower than a single drive. The main concern with SCSI RAID is redundancy - these things are meant for servers, where that typically matters more than performance.
    Also, with RAID 5, parity has to be calculated. This is an overhead, and it can easily mean that a 3-disk RAID 5 is slower than a 2-disk RAID 1! One way around that is controller cache (a DIMM on the controller) in write-back mode, where the controller acknowledges the write as soon as the data hits the DIMM rather than when it hits the disk. Since the DIMM is so much faster, you get nice speeds out of it. The only problem is that this is unsafe (data in the DIMM is lost if the power goes at the wrong moment) unless you get a battery backup unit, and I know some controllers won't let you enable write-back without one.
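    To make the parity point concrete, here's a minimal sketch in Python of how RAID 5 parity works - just the idea, not what a controller actually runs (block contents are made-up values):

    Code:
    # Minimal sketch of RAID 5 XOR parity (illustration only).

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        """XOR two equal-sized blocks together."""
        return bytes(x ^ y for x, y in zip(a, b))

    # A 3-disk RAID 5 stripe: two data blocks plus one parity block.
    data0 = b"\x10" * 4                 # block on disk 0 (made-up contents)
    data1 = b"\x22" * 4                 # block on disk 1
    parity = xor_blocks(data0, data1)   # parity block on disk 2

    # If a disk dies, its block is recoverable from the survivors:
    assert xor_blocks(data0, parity) == data1

    # The small-write penalty: updating ONE data block takes four I/Os
    # (read old data, read old parity, write new data, write new parity)
    # plus two XORs - done on your main CPU with a host-based card.
    # This is why a 3-disk RAID 5 can lose to a 2-disk RAID 1 on writes.
    new_data0 = b"\x55" * 4
    parity = xor_blocks(xor_blocks(parity, data0), new_data0)
    data0 = new_data0
    assert xor_blocks(data0, data1) == parity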

    As for actual performance - I'm not sure. I do know of one comparison site (in Dutch) where they do a lot of benchmarking. Might be worth a look (the table should make sense even though it's in Dutch): http://tweakers.net/benchdb/test/11...rigine[Official]=1&origine[User]=1&bar_max=85

    So with 3 15k rpm disks on an LSI controller, they are getting sequential throughput of 100MB/s. That LSI card is a h/w RAID (expensive) card, and may not go in your desktop. I'd guess this would be a bit slower with 10k rpm disks. So you don't gain very much compared to current fast SATA disks, and the cost is prohibitive...
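    As a very rough rule of thumb (and assuming the controller keeps all spindles busy), RAID 5 sequential reads scale with the number of data disks. The per-disk figures below are assumptions, not measurements:

    Code:
    # Back-of-envelope RAID 5 sequential read estimate.
    def raid5_seq_read(disks: int, mb_per_disk: float) -> float:
        # One disk's worth of each stripe is parity, so n-1 disks carry data.
        return (disks - 1) * mb_per_disk

    print(raid5_seq_read(3, 50))  # 3x 15k disks at ~50MB/s each -> ~100MB/s
    print(raid5_seq_read(3, 40))  # 10k disks somewhat slower -> ~80MB/s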
    I don't see any results for host-based RAID controllers there, but I would suggest that is the way forward. Since these can use your main CPU and memory, they can perform quite well. You'll have to do some digging for actual results, though.
    Hope that all makes sense and is not too depressing...
     
  3. djDEATH

    djDEATH Habari gani?

    Joined:
    23 Mar 2006
    Posts:
    434
    Likes Received:
    5
    That makes good sense, and I really appreciate your lengthy reply.

    Looks like this may be either too expensive to set up or just not enough of an improvement to warrant the money spent.

    I have a 1TB Samsung, and that is pretty fast, much faster than my old WD Caviar 250. I don't like the idea of using system CPU resources for RAID, but I'm pretty sure those drives will be faster even without RAID, so I may look into just getting a JBOD controller, or maybe even just a plain SCSI host adapter that I can merely plug them into.

    Anybody have any suggestions for a cheap(ish) SCSI controller that can take all three drives and get them into my system? I have no PCI-E x16 slot free, but I do have two PCI-e x1 slots and a PCI-e x4, as well as one spare PCI.
     
  4. LordLuciendar

    LordLuciendar meh.

    Joined:
    16 Sep 2007
    Posts:
    334
    Likes Received:
    5
    This thread: looking for raid6 controller

    We may be talking about SAS instead of SCSI, but all the same general principles apply. Some vendors (Highpoint Technologies, for example) don't sell SCSI controllers any more. With SCSI being phased out in favour of SAS, pretty much any hardware controller you can get your hands on right now is the best you can do - nobody is going to make new high-end SCSI RAID cards with fast I/O processors, as it's old technology. I personally am running an Adaptec; if I had to buy one it would be an Intel, LSI, or Adaptec controller, in that order of priority. Look for the smallest cache (since this isn't a server in a multi-user environment, a large cache does you no good) and the fastest I/O processor (ideally an Intel IOP). All RAID cards will work physically in a desktop PCI or PCI-E bus, but there are two things to look out for. First, driver support: make sure the drivers can run on your operating system. Second, if you buy a PCI-X card, make sure it has enough room to extend beyond your standard PCI slot without hitting any onboard components - PCI-X cards will function in a PCI 33MHz slot (with some performance loss), but you must be able to physically fit the card in the machine. To gauge the length, imagine the PCI slot being 70% longer.

    Edit: When I say smallest cache, I mean something like 64MB, not no cache at all - a single-user environment usually only needs 16 or 32MB of cache even for very intense workloads. With a RAID card, the card's cache does the work, so the individual drives' cache (1-32MB) no longer matters much. Also, on RAID performance: compare a 100MHz IOP on these cards with the 1.2GHz parts on top-end HPT SAS cards. SCSI will get you some disk performance and data redundancy far beyond a standard desktop drive, but even SATA is getting close to surpassing standard SCSI.
     
    Last edited: 27 Sep 2008
  5. Splynncryth

    Splynncryth 0x665E3FF6,0x46CC,...

    Joined:
    31 Dec 2002
    Posts:
    1,510
    Likes Received:
    18
    With the move to SAS, you should be able to find some second-hand SCSI HW RAID controllers. But these will almost certainly be PCI. I'll second the LSI comment, they make good cards. They are usually expensive, but if you keep a watch, you can occasionally find them used for very good prices.
     
  6. murtoz

    murtoz Busy procrastinating

    Joined:
    9 Apr 2008
    Posts:
    212
    Likes Received:
    8
    With one exception: voltage! If you have 5V-only PCI slots on your board, PCI-X will NOT work. You can tell if your slots are 3.3V or 5V by looking at the key (the little bit that separates the two indented connector sections of the PCI slot, where the card-edge notch sits). 3.3V slots have the key closest to the rear of your motherboard.

    <edit> Two more things:
    - normal PCI slots have a max throughput of 132MByte/s, shared between all the devices on the same bus. PCIe x1 slots manage roughly 250MByte/s each way, and x4 about 1GByte/s (a quick sanity check of these numbers is sketched below).
    - SCSI drives come in two flavours: hot-pluggable and normal. I hope you have the normal kind, otherwise you'll need a hot-swap backplane! If the drives have separate 4-pin molex power, they're the normal kind. In that case you'll need a proper SCSI cable: 68-pin LVD with termination (although I think some drives have jumpers for termination too - check the specs for your drives).
    You can easily tell if a cable is terminated, as there's a plastic block at one end of it. Connect the other end to the controller, and the drives to the connectors closest to the terminator (these cables usually have 6 connectors for disks, although a quick Google shows 2- and 4-connector versions too). If you have a dual-channel controller you could get two cables, but bandwidth on a single channel (cable) is 320MByte/s, so 3 disks should be fine on the same channel.
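    If you want to sanity-check the bus maths, here's a rough budget. The ~70MB/s per-disk figure is an assumption for 10k U320 drives, not a measurement:

    Code:
    # Rough bandwidth budget: 3 disks on one U320 channel, various host buses.
    PER_DISK_MB_S = 70            # assumed sequential rate per 10k drive
    demand = 3 * PER_DISK_MB_S    # ~210MB/s peak across the array

    buses = {
        "U320 channel (one cable)":    320,
        "PCI 32-bit/33MHz (shared!)":  132,   # 33MHz x 4 bytes
        "PCIe 1.x x1":                 250,   # per direction
        "PCIe 1.x x4":                1000,
    }
    for name, limit in buses.items():
        verdict = "OK" if limit >= demand else "bottleneck"
        print(f"{name}: {verdict} ({limit}MB/s vs ~{demand}MB/s peak)")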
     
    Last edited: 29 Sep 2008
  7. LordLuciendar

    LordLuciendar meh.

    Joined:
    16 Sep 2007
    Posts:
    334
    Likes Received:
    5
    I don't know of any PCI-X cards today (aside from industrial use) that are exclusively 3.3V - look at cards like the Intel SRCU41L and LSI MegaRAID SCSI 320-1, which are dual-voltage. OK... I take that back about all PCI-X cards, because the LSI MegaRAID SCSI 320-2 clearly does not support backwards compatibility with 5V slots.

    Regardless, in a desktop configuration you're not going to pass the 133MBps limit (give or take, depending on many factors), so I wouldn't bother with a high-end non-dual-voltage card. I'd go with the two cards I listed as compatible above.
     
