Storage 6Gb/s SAS Cards (HighPoint vs Areca)

Discussion in 'Hardware' started by LordLuciendar, 21 Feb 2012.

  1. LordLuciendar

    LordLuciendar meh.

    Joined:
    16 Sep 2007
    Posts:
    334
    Likes Received:
    5
    I think I've finally got my head wrapped around this, but I want to be sure. With 6Gb/s cards, the adapters offered by HighPoint Technologies all appear to have become what I would call Host Bus Adapters (HBAs), where the processing is performed by the system processor and memory, augmented by a hardware chipset on the card. For example, neither the RocketRAID 2760A nor the 2740 appears to have onboard cache memory, and neither supports a battery backup unit (BBU). Areca, on the other hand, have started calling all of their adapters HBAs, but from their specifications they still seem to be full, dedicated RAID cards with onboard processing, memory, and BBU support. I am looking at the ARC-1882ix-16-4G, but it is worth mentioning that my price for the RocketRAID 2740 is $450, while the ARC-1882ix-16-4G is $1300 plus another $130 for the BBU.

    Does anyone have any experience with these cards, or suggestions for which one to get? I have a 16-bay chassis with four SFF-8087 connectors and a free PCIe 2.0 x16 slot.
     
  2. AstralWanderer

    AstralWanderer What's a Dremel?

    Joined:
    17 Apr 2009
    Posts:
    749
    Likes Received:
    34
    HighPoint cards do have compatibility problems with some motherboards - I had to return a RocketRAID 2740 since it refused to work on a Gigabyte GA-EX58-UD3R motherboard, and this has been reported by others.

    No comment about Areca - they're well out of my price range. However, one factor is surely going to be what level of RAID you plan to use - 0 or 1 (and variations thereof) require less processing, so they aren't likely to benefit from on-board processing to the extent that, say, RAID 5 might. The type of storage (HDD or SSD) is significant too, so more details on your planned setup may help others provide more specific recommendations.
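    For illustration, the extra work in RAID 5 comes from computing a parity block (an XOR across the stripe) on every write - something RAID 0/1 never has to do, which is why a dedicated processor matters more there. A minimal Python sketch of the idea:

    ```python
    from functools import reduce

    def raid5_parity(blocks):
        """XOR all data blocks byte-by-byte to form the parity block."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def rebuild(surviving_blocks, parity):
        """Recover a lost block by XORing the parity with the survivors."""
        return raid5_parity(surviving_blocks + [parity])

    # One stripe across three data drives (toy 4-byte blocks)
    data = [b"AAAA", b"BBBB", b"CCCC"]
    p = raid5_parity(data)

    # Simulate losing drive 1 and rebuilding it from the rest plus parity
    recovered = rebuild([data[0], data[2]], p)
    assert recovered == data[1]
    ```

    On a software/HBA-style card this XOR runs on the host CPU for every write (and every rebuild), which is exactly where a hardware RAID card's dedicated XOR engine earns its price.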

    Another option to consider may be the LSI 9265-8i MegaRAID - one review of it here.
     
  3. KayinBlack

    KayinBlack Unrepentant Savage

    Joined:
    2 Jul 2004
    Posts:
    5,913
    Likes Received:
    533
    +1 on LSI - I'm using an SRCSAS18E (a rebadged 8408E) and it's great. It's only 3Gb/s SAS though.
     
  4. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    +1 on LSI as well (without spending a fortune on a top-end Areca) - as per my sig, I'm using a 9260-8i for my V2 array and SAS HDDs.

    The 9265 is certainly better if you've got enough 6Gb/s SSDs to come close to maxing it out, but personally I'll be more than happy with my 9260 until SATA Express launches - whilst there's bound to be a shonky Marvell solution sooner, there will almost certainly be a delay before Intel and AMD have an onboard version.


    The compatibility issue AstralWanderer mentions can be a general one with (especially) older mobos and PCIe RAID (and PCIe SSD) cards, due to a BIOS memory issue...

    Basically (to the best of my understanding), every mobo has a limited amount of BIOS option ROM space, and every storage controller (including the onboard ones) uses some of it... ...and if there's not enough space left, a controller won't be operational.


    Now, whilst I've not tested the limitations of my P67 board, with my current X48 board (and the Rampage Extreme I had as the proper machine) it was possible to have both the onboard Intel controller and an LSI card working together, but attempting to add an additional card (originally I had a 4i as well) to either of them caused one or the other not to be detected - as did testing with a random 4-port SATA card I had kicking around...

    ...although a 2-port eSATA card did work, and disabling the Intel RAID boot ROM allowed them both to work.
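    That matches how legacy option ROMs behave: the BIOS shadows each controller's boot ROM into a fixed legacy memory region, and once it fills up, later ROMs simply don't load. A minimal sketch of the effect - all ROM sizes here are made up purely for illustration, real ones vary by card and BIOS revision:

    ```python
    def fit_option_roms(controllers, region=128 * 1024):
        """Greedily shadow each boot ROM until the legacy region runs out."""
        used, loaded, skipped = 0, [], []
        for name, size in controllers:
            if used + size <= region:
                used += size
                loaded.append(name)
            else:
                skipped.append(name)   # no space left: card won't be bootable
        return loaded, skipped

    # Hypothetical ROM sizes for illustration only
    cards = [("VGA BIOS", 48 * 1024),
             ("onboard Intel RAID", 56 * 1024),
             ("LSI RAID card", 32 * 1024),
             ("2-port eSATA card", 16 * 1024)]

    loaded, skipped = fit_option_roms(cards)
    ```

    With these made-up numbers the big RAID card gets squeezed out while the small eSATA card still fits - and disabling the onboard RAID boot ROM frees enough space for everything, which is consistent with what PocketDemon saw.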
     
  5. Mark_Skeldon

    Mark_Skeldon What's a Dremel?

    Joined:
    5 Feb 2012
    Posts:
    132
    Likes Received:
    1
    +1 for LSI Logic here too :thumb:

    Sometimes getting the clip-on backup batteries off can be a bit hairy though - watching the card physically bend as you try to remove it. It's even harder on a double-battery model with one on either side :duh:
     
  6. LordLuciendar

    LordLuciendar meh.

    Joined:
    16 Sep 2007
    Posts:
    334
    Likes Received:
    5
    I've actually had a similar experience with a RocketRAID 1740 on an Intel DP35DP motherboard, where attempting to use both Intel Matrix RAID and the HPT card caused the system to freeze during POST. The solution ended up being to configure the Intel RAID (4x320GB RAID 5) without the HPT card installed, set the controller back to AHCI, install the HPT card, configure the HPT RAID (3x500GB RAID 5 and a 1x200GB boot disk), and then boot.

    I can't really justify it, but I've just never been a fan of Intel or LSI cards, even though I have a few SRCSAS18Es out in the field. I fell in love with HighPoint cards several years back, when they were topping performance benchmarks at a significantly lower price. I have no experience with Areca, but I've heard rave reviews saying they top the performance charts.

    The setup is likely to be somewhere between 16 and 24 Hitachi 1TB 7K1000.D drives, or 2TB or 3TB 7K3000 drives if they choose the Areca card, with possible expansion by another 12 or 24 drives through an SFF-8088 port at some point in the future. I had originally considered SSDs, but the price difference for any real amount of storage is just too significant.

    The issue this new server is intended to address is transfer speed. The server houses 9-10TB of data (likely to grow by approximately 500GB per month), of which approximately 1TB is copied to the server from USB 3.0 externals, transferred to workstations over gigabit networking, modified, then copied back to the server - all daily. At least it would be daily if everyone wasn't constantly waiting for file transfers to complete. A few additional details: that 1TB contains possibly 2-3 million files, and while they are on the workstations they are being converted between file types, indexed, and organized.
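    To put rough numbers on the bottleneck - the 80% line-rate efficiency and the 4-NIC team below are assumptions for illustration, not measurements:

    ```python
    def transfer_hours(size_gb, link_gbps, efficiency=0.8):
        """Rough wall-clock hours to move size_gb over a link,
        assuming ~80% of line rate after protocol overhead."""
        gigabits = size_gb * 8
        return gigabits / (link_gbps * efficiency) / 3600

    daily_gb = 1000                              # ~1TB in (and again out) per day
    single = transfer_hours(daily_gb, 1.0)       # one gigabit NIC: ~2.8 hours
    teamed = transfer_hours(daily_gb, 4.0)       # 4 teamed NICs, ideal balancing
    ```

    One caveat on the teamed NICs: standard link aggregation balances per-flow, so a single large file copy still only sees one NIC's worth of bandwidth - teaming mainly helps when several workstations are pulling data at once.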

    Details on the new server:
    2x Intel Xeon E5-2620 or E5-2609 when available, or 2x Xeon E5620
    ASUS Z9PE-D16, when available or Z8PE-D12
    8x8GB DDR3 RDIMM (or 6x for Westmere)
    Norco RPC-4216, 4220, or 4224 chassis
    800W Redundant Power Supply
    16-24 Hitachi 7K1000.D 1TB or 7K3000 2TB or 3TB
    HPT RocketU 1144A USB3 Card
    2, 4, or 6 teamed Intel Gigabit NICs

    Also, an unrelated question: does anybody have a 4U server and info on whether a 120mm or 92mm cooler will fit on the processors? I'm considering the Thermaltake Contac 29 or Contac 21.
     