
Build Advice: Which RAID Card?

Discussion in 'Hardware' started by Buzzons, 25 Apr 2011.

  1. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Soo...

    Real RAID cards -- I've always had 3Ware/LSI ones (3ware 9650SE ones in general). However, I'm wondering what everyone thinks of the other main manufacturers' cards? (Adaptec, Areca)

    I'm thinking of getting an LSI MegaRAID SAS/SATA 9265-8i but don't want to blow that kind of cash if there are better cards out there.

    It will be used for high read/write IO on a file server, so it needs to be great at random and sequential IO.

    There will be 2 SSDs in RAID 0 for those that need the speed (holding databases that aren't essential but need to be fast when they're being used -- they will be backed up, so please, no flaming regarding RAID 0 etc.), as well as 6 drives in RAID 6 (3TB WD) for the actual storage.
     
  2. bigkingfun

    bigkingfun Tinkering addict

    Joined:
    27 Jul 2008
    Posts:
    988
    Likes Received:
    59
    I have only had Adaptec cards and they have never given me any trouble.
    I am not sure which model would be suitable for your purpose though.

    Good luck!
     
  3. Bungletron

    Bungletron Minimodder

    Joined:
    25 May 2010
    Posts:
    1,171
    Likes Received:
    62
    I just got a HighPoint RocketRAID 2680 from Scan, running a single RAID 1 array and JBOD. It is very quick, the software is excellent and it's keenly priced too, but no RAID 6 :(
     
  4. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Thanks for the input, guys. Guess I'll have to wait for some serious NAS guys lol
     
  5. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Whilst the 9265-8i is a damn good card, I'm not 100% convinced that the SSDs/HDDs you're proposing actually *need* that much power from a RAID card... ...well, if you were looking at something like 8x high-end 6Gb/s SSDs in a parity RAID setup then this would 'kick ass' (especially with the additional FastPath key), but you're not.

    So I'm just wondering if it'd be worth thinking about the 9260-8i instead, as it'd save you some serious money & I'm not convinced you'd notice any real-life difference with the setup you've listed?

    I guess it also depends on what your future upgrade plans are though - both in terms of drives & timing... Well, move on another ~12-18 months & LSI will (no doubt) have something even faster out.
     
  6. KayinBlack

    KayinBlack Unrepentant Savage

    Joined:
    2 Jul 2004
    Posts:
    5,913
    Likes Received:
    533
    I have an Intel SRCSAS18E I need to offload. Great card, but I never could get my SAS setup going. Maybe someday in the future...
     
  7. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Pocket - true, but I have a feeling that in about 2 months' time the 2x SSD RAID 0 will need to become an 8x SSD RAID 0 and the other disks moved out to a different box. Hence future-proofing.

    Also, the FastPath key thing - there's really no info about it. Could you explain what it is etc.? (and how you order the card *with* it)
     
  8. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Fair enough... Well, that makes the 9265-8i the card to go for.


    As to the other bit, LSI do 3 'add-ons' for their newer cards - a battery (which protects data in the cache), CacheCade (for using SSDs as a cache for HDDs on that or a 2nd/3rd/... LSI card) & FastPath (which increases the max IOPS for SSDs in RAID).

    From a quick check, on the 9265 it takes them from 250,000 to 465,000 IOPS.


    Now, obviously the battery's a battery but, whereas originally the other 2 were tiny physical dongles that plugged into the 9260s, they were then also sold as licences which you typed into the MegaRAID software to enable (NB one licence only covers one card, so if you've got multiple cards you'd need multiple licences).

    Doing a quick search though, it looks like LSI00247 (the FastPath product code for the 9265) is again a small dongle (at least for the moment), since it's described in a few of the online shops as "LSI MegaRAID FastPath software physical key" - from a very quick Google Shopping search in the UK, the cheapest price seems to be a pound or two over £100 (inc. delivery).


    Yeah, it's all choice, but I personally found with my 4x V2s on a 9260-8i that the FastPath key did make a difference...

    ...obviously the 9265 is faster without it than the 9260 is with it, but you're then looking at much faster SSDs & twice the number...

    Mmmmm... ...technology always looks a bit crap a year & a half down the line. :(


    Oh, & I also have the battery for my 9260-8i, but god knows if it's ever actually done anything other than charge & test itself, as I've got better things to do than look at logs. ;)
     
  9. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    heh thanks for the info -- guess I'll work out how to order it with the physical key then. That's a huge increase in IOPS.

    Will only be running 1 card in the box so shouldn't need to worry about CacheCade - even though it sounds rather cool. Servers are all in a DC so the battery probably isn't needed. If both phases of power go out to the rack I've got bigger issues than some data loss (like the entire rack being offline!)

    I assume the dongle just attaches to the card then? And I may have to give the MegaRAID stuff a key? Is that all I do?

    Also, RAIDing SSDs -> TRIM doesn't work through it right? Is that a big issue still or have they fixed it?
     
  10. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Sorry - I clearly confused things slightly...

    With the 9260 - originally there were dongles for either FP or CC that attached to the card, & then a licence code option was released as an alternative method (entered in the MegaRAID software).

    With the 9265 - there currently appear to only be dongles - though maybe a code option will follow at a later date...

    [NB I've never price-compared them, as I'd have been a little miffed if the licence code had been much cheaper than the dongle I'd bought a couple of months earlier.]


    Yeah, the dongles are tiny things that plug onto a couple of pins on the card near to the bracket - I think it's 3 pins on the 9265, whereas it was 2 on the 9260.


    TRIM still doesn't work through RAID, but if you choose the right SSDs then it's not exactly an issue... ...well, I've been running SSDs in R0 for 18 months or so without any problems.

    Well, it was a major reason for my going for the V2s rather than C300s (as the latter wasn't great with heavier writes in a non-TRIM environment)...

    ...but with both (for example) the V3 & C400 then the GC has been improved... AFAIK, the V3 still wins out in non-TRIM by a significant margin though.


    Oh, & you also can't update the SSDs' firmware while they're in RAID arrays - not a major hassle, as (at least OCZ's) updates are non-destructive & it simply means plugging them in as secondary drives on another machine & then replugging them (in any order) back onto the card...

    ...since you've got a server then there are (obviously) computers to hand to do this with & it's dead easy.
     
  11. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Sweet - awesome. I'll make sure that I get the add-in dongle so I get the best output from the card. Was thinking of going with some of the new SSDs (Vertex 3 maybe).
     
  12. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Also -- as I'm going to be mainly storing large files on the arrays -- nothing really smaller than 5MB -- what kind of settings should I use when building the array?
     
  13. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Mmmmm... I've only really looked at my LSI from the perspective of optimising for an R0 OS/programs/etc drive...

    For that, on the 9260, the best settings were a 128KB stripe size - with No Read Ahead / Write Back (which ideally needs a battery backup to prevent data loss - which you have) / Direct IO...

    (oh, & there's an SSD Guard feature which specifically monitors for issues & 'apparently' will move data off a potentially failing drive - though I've never had this happen, so I can't comment on how good it is)

    Whilst my belief is that the latter 3 are still the best settings for SSDs on the 9265 (though again my usage is different so I'd double-check), the stripe size is dependent on the usage.
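
    If it helps, here's a rough sketch of how a 2x SSD R0 with those policies could be created from the OS with LSI's MegaCli tool (wrapped in Python purely for illustration). The enclosure:slot IDs are made up - list yours with "MegaCli -PDList -aAll" - & the flag spellings are from memory, so double-check them against MegaCli's own help before running anything:

        import subprocess

        # Hypothetical example: create the 2x SSD RAID 0 with the policies above.
        # Enclosure:slot IDs (252:0, 252:1) are invented for the example.
        # MegaCli flag spellings are from memory - verify locally before use.
        cmd = (
            "MegaCli -CfgLdAdd "
            "-r0[252:0,252:1] "  # RAID 0 across the two SSDs
            "WB NORA Direct "    # Write Back, No Read Ahead, Direct IO
            "-strpsz128 "        # 128KB stripe size
            "-a0"                # adapter 0
        )
        subprocess.run(cmd.split(), check=True)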


    So, re the stripe size, generally, if you were only storing very large files (100s of MBs/GBs) & wanted to optimise sequential access then you'd aim for as large a stripe size as possible...

    ...however, since a database is looking at only accessing tiny parts of the whole to retrieve the specific data requested, you then look at something much smaller.


    AFAIK - although this really isn't my area of expertise, so double-check this - the usual rule for database use is that the stripe size should equal the data block size...

    ...however, because SSDs can only erase in blocks, depending upon how non-static your databases are & the level of free space, you 'may' need to consider accepting a hit on this theoretical optimisation in favour of making the stripe size no smaller than the NAND block size.
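
    To put some rough numbers on that rule of thumb (both figures below are assumptions purely for illustration - check your database's page size & the SSD manufacturer's erase block size):

        # Rough illustration of the stripe-size rule of thumb above.
        # Both figures are assumptions for the example, not measurements.
        db_block = 8 * 1024           # e.g. an 8KB database page (assumption)
        nand_erase_block = 64 * 1024  # e.g. a 64KB erase block on an enterprise drive (assumption)

        # Classic rule: stripe size ~= data block size...
        # ...but on SSDs don't go below the erase block, so take the larger of the two.
        stripe = max(db_block, nand_erase_block)
        print(f"suggested stripe size: {stripe // 1024}KB")  # -> 64KB in this example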


    Otherwise, a few things I think are worth bearing in mind -

    1. all SSDs in non-TRIM environments need powered-on idle time for GC,

    2. with the V3s, whilst they can also suffer from the same slowdown that the V2s had with excessive r-e-w (read-erase-write) cycles, it's been stated that this can be rectified by a simple reboot - so you may want to factor that into your regime - &

    3. there are significant advantages to using something like Diskeeper's HyperFast tech (if you can ignore the Scientology connection) with SSDs to automatically consolidate data on SSDs with higher r-e-w cycles (without going OTT as the manual methods/PerfectDisk do).

    [NB for home use, OCZ are happy recommending DK, PD or the manual method (as the need to use it is very infrequent, if at all), but OCZ have been working with DK for a while on HyperFast...]
     
  14. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Fair play -- I was thinking both for the database as well as the large storage arrays behind it that it will use. I'd prefer knowing that the large disks are set up to be as fast as possible, as the database and other software will be writing large files to them a lot of the time.

    On that note -- a lot of the files will be written/read close to each other (e.g. one app creates a 10MB file and another picks it up off the storage and does stuff with it) -- this could be happening 10-15 times at once (10 apps creating files, 10 apps reading those files). Does that count as sequential or random access?
     
  15. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Mmmmm...

    First off, it's a long time since I looked at servers, so everything here is working from old memory.


    Secondly, you need to remember that SSDs don't store data sequentially - unlike HDDs, it makes no odds to the speed if a file that takes up 3 blocks is in blocks 1, 2 & 3 or 999999, 27 & 20003 (obviously the numbers being arbitrary illustrations rather than representing addresses).


    Then, thirdly, am I correct in assuming that you're actually looking at two very different uses for the server? - one to provide a database & the other for (as a made-up example) accessing audio files?

    Well, the actual setup would depend upon the priority that you assign to each of the two processes, the actual IO demands of each of the two processes &, thus, how you set up both the server OS & the RAID card.

    Now, obviously you wouldn't want any one of the workstations to be monopolising the server & so there will be a need to design to some extent around a smaller stripe size & lower QD, but it depends upon the actual usage.

    Assuming we're talking about a single array (once you have the 8 SSDs), 10 workstations & a very fast LAN -

    1. Imagining that the 10MB files are only accessed at 1 per workstation per hour but the database is being accessed semi-constantly by all of the workstations, then you'd want to look at optimising it based upon the data block size of the database (with the data block being linked to the SSD block size depending on how static the database is).

    Well, you wouldn't want a huge stripe size, which would be more likely to lead to the same drives being accessed simultaneously.

    2. If the database was rarely accessed & the 10MB files were being accessed at several per workstation per hour, then you'd be better off aiming for a slightly larger stripe size so that there was greater optimisation of the 10MB files when they were needed.

    Accepting, of course, that there 'could' be times when 2 or more workstations could be competing for data but that, on average, each workstation would get better performance.

    (&, separately, there could be significant advantages to splitting it into two arrays with differing stripe sizes)

    3. & if they were both being accessed frequently/constantly then you'd tend towards 1.


    & fourthly, one thing that does need explaining better when looking at highly non-static data is the NAND block size, as it's a compromise - or rather, you need to make sure that what you're proposing to buy is up to the task.

    Well, depending on the actual SSD bought, the NAND block size 'can' be anything up to 1024KB, which would obviously be a ridiculous size & have a very limited lifespan if used in a highly dynamic server (though it's okay for a normal consumer drive) - however, the NAND blocks of the enterprise versions are much smaller - as an example, for 3Xnm SLC NAND these tend to be 64KB.

    Now, not knowing what you're buying (contact the manufacturer), there 'could' need to be some kind of compromise between the stripe & NAND block sizes - though assuming you're not looking at a consumer drive then...

    Well, the V3 comes in (at least) 5 varieties (roughly best to worst for your usage) - the EX (SLC), the Pro (either eMLC or MLC) & the consumer ones ("Max IOPS" & standard MLC) - & you'd need to be looking at the first one or two...

    [NB this isn't solely limited to OCZ/SandForce drives by any stretch - a consumer drive from any manufacturer simply won't be up to the task.]


    What I *should* have written in my previous post is that, with SSDs, the stripe size should be no more than a reasonable multiple of the data block size for highly accessed databases.

    My bad for not thinking this through.
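
    Putting those scenarios & that rule together as a toy sketch (every figure here is an assumption for illustration, not a recommendation):

        # Toy illustration of the scenario logic above - all figures are assumptions.
        DB_BLOCK_KB = 8      # hypothetical database page size
        NAND_BLOCK_KB = 64   # hypothetical enterprise-SSD erase block size
        LARGE_FILE_KB = 256  # larger stripe aimed at the ~10MB files

        def pick_stripe_kb(db_heavy: bool, files_heavy: bool) -> int:
            # Scenarios 1 & 3: the database dominates, so size around its blocks,
            # but no smaller than the NAND block (& no more than a reasonable
            # multiple of the data block).
            if db_heavy:
                return max(DB_BLOCK_KB, NAND_BLOCK_KB)
            # Scenario 2: the big files dominate, so lean towards a larger stripe.
            if files_heavy:
                return LARGE_FILE_KB
            return max(DB_BLOCK_KB, NAND_BLOCK_KB)

        print(pick_stripe_kb(db_heavy=True, files_heavy=False))   # -> 64
        print(pick_stripe_kb(db_heavy=False, files_heavy=True))   # -> 256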
     
  16. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Think I'm going to do it so that there are 2 servers, both with 8 disks -> 1 for databases, 1 for the files that are needed. Then a third, less important one that just backs it all up - so just 16x 3TB disks or the like -- speed not being important.

    So for the database -- a small stripe size as it's accessing small data, and for the fileserver a large stripe size?

    I'll be going with the new OCZ Vertex 3 I think.

    As another question - how do you set the queue depth on these things?
     
  17. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    With either (on SSDs) - balance the stripe size against (a) the data block size (database) / increasing it for sequential access (large files), (b) the SSDs' NAND block size & (c) the need to stop any one workstation from monopolising the array...

    With HDDs, you simply ignore (b).

    &, depending upon whether you're looking at (i) realtime or (ii) offline (i.e. overnight when the office is empty) backup to the 3rd array, you either want to (i) treat it roughly as a workstation & give it a lower stripe size, or (ii) set it up to maximise sequential transfer speeds & use a larger one.

    (basically what you've said, but that's the set of things to consider when setting the actual size)
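
    As a rough worked example for the large-file array (the 256KB stripe size is purely an assumption - substitute whatever you actually pick):

        # Rough arithmetic for a full-stripe write on the 6-drive RAID 6 array.
        # The 256KB per-drive stripe size is an assumption for illustration.
        n_drives = 6          # 6x 3TB WD drives in RAID 6
        parity_drives = 2     # RAID 6 holds two parity blocks per stripe
        stripe_size_kb = 256  # hypothetical per-drive stripe size

        # Data laid down per full stripe = stripe size x number of data drives.
        full_stripe_kb = stripe_size_kb * (n_drives - parity_drives)
        print(f"full stripe = {full_stripe_kb}KB")  # -> 1024KB, so a 5MB+ file spans several full stripes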


    As said though, the V3s (Vertex 3s) come in different varieties - the (cheaper) consumer ones not being suitable for what you're proposing - so it's a balance between the (much more expensive) SLC EX version & the (somewhat cheaper but less robust - though still far more robust than MLC) eMLC Pro version.

    Similarly, the servers really 'should' be using enterprise HDDs (you've not said), but the SSDs are liable to suffer far more as they have limited r-e-w cycles - hence why there are more robust server versions at 2 price points.


    AFAIK, you can't set the QD on the card itself, so this would need to be done as a "virtual QD" (or similar jargon term) within the server OS - as this will either be a simple command or a menu option, it's possible to play with it & test in real life without taking anything offline.

    Obviously though, you need to treat the SSD & HDD volumes somewhat differently -

    The standard rule for HDDs, back in the day, was to limit the QD to 2x the number of disks, but, of course, tech improves (speeds increase & latency falls)...

    ...however, there's a balance between small reads (where a much higher QD can help as there's an increase in burst speeds) & larger reads (where a lower QD helps, as a higher one leads to a dramatic increase in latency which will slow the average speed down in multi-transactional situations).


    Then, with SSD-only arrays, whilst logically you might imagine that you should go for a much higher QD due to their decreased latency, there's actually no gain at all because the latency is already so low, so it's more beneficial to stick with a lower one.


    With writes (as these can normally be set separately), because the card comes with a write cache (& the same goes for SSDs), you don't need a high QD for either small or large ones - the network will actually become unresponsive with too high a QD due to it becoming saturated.
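
    As a very rough starting point (purely illustrative - the 2x-spindles figure is just the old HDD heuristic above, & the SSD cap is an assumption to keep the queue shallow):

        # Rough starting queue depths based on the old heuristics above.
        # These are assumptions to tune against real monitoring, not fixed answers.
        def starting_qd(n_drives: int, is_ssd: bool) -> int:
            if is_ssd:
                # SSD latency is so low that a deep queue buys little - keep it shallow.
                return min(n_drives, 8)
            # Old HDD rule of thumb: roughly 2x the number of spindles.
            return 2 * n_drives

        print(starting_qd(8, is_ssd=True))   # 8x SSD RAID 0 -> 8
        print(starting_qd(6, is_ssd=False))  # 6x HDD RAID 6 -> 12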


    What, of course, you're aiming for is for the bandwidth to approach being saturated by the workstations (& backup server) & for the max transfer rate from each array, so it's about choosing appropriate stripe sizes based upon the actual uses & hardware, & then looking at the real-life network usage with it all up & running to optimise it.


    [NB this is all from old memory, so whilst it 'should' be correct, if anyone disagrees & wants to correct me then I've no problem with that.]
     
  18. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,069
    Likes Received:
    41
    Wow, that's an insane amount of detail, thank you :) I'll probably poke you when the kit arrives, if you don't mind? I'd hate to set it up slightly wrong and have an impact on IO because of it.
     
  19. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    No problem...

    It's just crossed my mind that you can almost certainly set the QD on the cards in the WebBIOS (the boot-up & press-some-keys option); I've just ignored most of it myself with my 9260s (I also have a 4i that I need to sling on eBay as I don't have the need for it anymore) as it's irrelevant for workstation usage.


    Yeah, it's all a combination of preplanning - knowing what the general usage will be & setting the basic things up to suit (i.e. the stripe sizes), as they'd be a damn nuisance to redo due to server downtime - & monitoring the performance of the actual usage to optimise once it's all live...

    ...whilst there's lots of money in it (or at least there can be), I found it all to be incredibly soul-destroying as work, & so now do (slightly) more fun things - hence my "this is all from old memory..." comment, as I've tried desperately to remove it all from my mind/it's not 'working memory' anymore.


    What I'd suggest is that you now do slightly more focused research based upon the specific OS & database you're actually going to be using, prior to everything arriving, as you'll find specific forums for most of it.

    The one thing (yet another one) to take on board though is that the basic settings on the LSI cards (re the "No Read Ahead / Write Back / Direct IO..." - you can alter these live if you want to see the real-life impact, btw) run largely counter-intuitively when compared to other vendors'.

    Oh, & LSI's tech support are really very good if you get completely stuck.


    Yeah, I'm not saying that I won't help if I can, but there's too much stuff re specific settings in the different OSes & whatnot that I really don't know anymore.
     
  20. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    JFYI, whilst it's not looking at the V3s, this is a pretty reasonable write-up of the 9265 with SSDs.

    One thing that's really apparent is that there are simply huge gains to be had from the SLC SSDs for small random r/ws...

    On the assumption that this holds for the V3s (& there's no reason at all not to imagine that this would be the case, as the figures are from LSI for any SSDs; not OCZ or Crucial or whoever), you'd really need to balance the large initial increase in outlay for the V3 EX (SLC) -vs- the (cheaper) V3 Pro (eMLC) on the basis of how important random r/ws are to you.


    Otherwise, they are saying that what I'd recommended as base settings for the card (though I forgot the last one somehow) -

    "No Read Ahead, Direct I/O, Always Write Back, and Disk Cache Enabled"

    - are what you should be aiming for...

    ...so at least that bit's definitely correct. :)
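
    If it helps, here's a rough sketch of how to sanity-check that an existing virtual drive actually matches those settings, using LSI's MegaCli tool from the OS (wrapped in Python purely for illustration; settings can also be changed live via MegaCli or in the WebBIOS). The exact wording of MegaCli's output varies by version, so treat the filter below as an assumption & check against your own output:

        import subprocess

        # Dump each logical drive's properties with MegaCli & pull out the
        # cache-policy lines to compare against the settings quoted above.
        # "MegaCli -LDInfo -LAll -aAll" is the standard listing command.
        out = subprocess.run(
            ["MegaCli", "-LDInfo", "-LAll", "-aAll"],
            capture_output=True, text=True, check=True,
        ).stdout

        for line in out.splitlines():
            # The read-ahead / write-back / IO policy & disk-cache lines all
            # mention "Cache Policy" in the versions I've seen (assumption).
            if "Cache Policy" in line:
                print(line.strip())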
     
