Build Advice Which SSD?

Discussion in 'Hardware' started by Buzzons, 22 Jun 2011.

  1. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,025
    Likes Received:
    31
    So..

    Looking over thessdreview.com there are loads of SSDs they give "Editor's Choice" to. My question is this: which SSD should I go for?

    I'll be needing 2, and they'll be plugged into one of these LSI 9265-8i cards. They'll be running in RAID0 and each should be 128GB-256GB in size.

    They'll be used in a situation where there's huge IO (large files being written and read constantly - smallest files being written will be ~3MB, largest will be ~50MB) - both in and out at the same time.

    So what should I get?
     
  2. thetrashcanman

    thetrashcanman Angel headed hipsters

    Joined:
    18 Nov 2010
    Posts:
    2,716
    Likes Received:
    76
    I would say a Vertex 3 128GB, but I'm not entirely sure it's suitable for your needs. Although it's one of the fastest SSDs of this generation, its speed is only usable with compressible data. So my question to you is: will the things you'll be writing to and reading from the drives be compressible?
     
  3. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,025
    Likes Received:
    31
    Not really, no. They'll be binary blobs, so I'd say totally incompressible. Would that really have much of an impact? The LAN it'll be on is only 1 Gigabit, so it just has to stay above ~120MB/s read/write in RAID0 at all times.
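As a rough sanity check on that ~120MB/s figure, here's a quick back-of-envelope sketch (the ~5% protocol overhead is an assumption, not a measured number):

```python
# Back-of-envelope: what does a 1 Gbit LAN actually demand of the array?
GIGABIT_BPS = 1_000_000_000          # 1 Gbit/s line rate

raw_mb_s = GIGABIT_BPS / 8 / 1e6     # bits -> bytes -> MB/s (decimal MB)
payload_mb_s = raw_mb_s * 0.95       # assume ~5% Ethernet/TCP framing overhead
per_drive_mb_s = payload_mb_s / 2    # RAID0 stripes the load across 2 SSDs

print(f"raw: {raw_mb_s:.0f} MB/s, payload: ~{payload_mb_s:.0f} MB/s, "
      f"per drive: ~{per_drive_mb_s:.0f} MB/s")
```

So even a heavily degraded SSD only has to sustain roughly 60MB/s each before the network, not the drives, becomes the ceiling.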
     
  4. outlawaol

    outlawaol Geeked since 1982

    Joined:
    18 Jul 2007
    Posts:
    1,935
    Likes Received:
    65
    Get a Crucial M4 128GB. I wouldn't bother with OCZ stuff, they have horrendous support. A C300 would be ideal as well, but the M4 has better read/write speeds.
     
  5. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Unless the data's actually compressed (ie rar, zip, etc files, or compressed media - mp3, ra, mkv, mov, etc) then the data's not going to be totally incompressible...

    ...though will vary from file to file.


    As to the speed of the 6Gb/s SFs, whilst it's true that with 100% incompressible data they dramatically lose speed (& similarly there's very little data which is completely compressible using the algorithms), neither reflect the majority of real life data...

    ...well, a Win7 & Office install is more than 50% compressible by a 1st gen SF controller & that has a mix of highly compressible text files, averagely compressible data files & some very incompressible media/picture files...

    ...but with either the 120GB V3 "max iops" or the 240GB (standard) V3 (obviously the 240GB "max iops" is even faster, but the standard one has no competition at all on speeds & it's not quite available yet), they outpace the competition when you look at most real data rather than either extreme.


    The other thing that has to be considered (& probably one of the key ones for your needs) is that, whilst all SSDs need idle time to maintain performance, non-SF SSDs are less robust in non-trim environments - certainly something like the Crucials needs a significant amount of extra time to recover - whereas the new SFs are even more robust than the previous gen, & the 'worn in state' that could affect them is now resolved by a reboot...


    Now, with all of that there are obviously 2 different V3s that i've suggested - but 'if' you are 100% sure that the data is incredibly incompressible you could also look at a pair of the 128GB intel 510s...

    Whilst not quite as robust as the V3s in R0, they are probably the best of the alternatives on that score & trade places with the standard 120GB V3 on the various b/ms.


    So, that should give you enough info to make a decent choice there, but if there's anything unclear just ask. :)


    [Edit]

    The one thing i should have queried is the actual amount of data being written - simply that 'if' it's a huge amount then consumer SSDs will not survive unless you accept a much more frequent replacement rate.

    Okay, the SFs do have an advantage here by being able to compress data, but you may have to either look at enterprise models (either eMLC or SLC) or, since the cost of those may be slightly prohibitive, instead look at a larger array of (small but fast) SAS enterprise HDDs.


    [Edit 2]

    Oh & what's the reason for the 9265-8i?

    Okay, it's got higher iops than the 9260 (though this isn't hugely relevant to what your stated usage is), but if you're only after 2 SSDs (or 2 SSDs + some HDDs) then you'd be fine with either the 9260-4i or 8i & save considerably...
     
    Last edited: 23 Jun 2011
  6. Slizza

    Slizza beautiful to demons

    Joined:
    23 Apr 2009
    Posts:
    1,738
    Likes Received:
    120
    The speed of the drives will rapidly decline in a RAID0 configuration if you're using them constantly. No trim support will have a large impact. Perhaps a single large drive would be best?
     
  7. IvanIvanovich

    IvanIvanovich будет глотать вашу душу.

    Joined:
    31 Aug 2008
    Posts:
    4,870
    Likes Received:
    252
    If it was me I would be looking at the ibis drives. You'll get all of the performance, without the hassle of raid. It would also be a cheaper solution.
     
  8. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    No, this is not strictly true 'if' the OP buys SSDs which do not rely on trim - such as the Sandforce ones - & that's *one* of the key reasons why i own 4x V2s rather than 4x C300s... ...where my speeds have not "rapidly (or otherwise) declined" as a result of them being in a non-trim (R0) environment.


    As said though, there needs to be a reasonable amount of idle time based on the SSD(s) chosen & the amount of data being written & erased, though this is true whether in R0 or not, & in a trim environment or not - the trim command can only be sent when there are no other commands running in the queue *&* trim without GC isn't great, as processes like block consolidation, wear levelling & actually freeing up the blocks that trim has told the controller can be cleared all still need idle time to run.


    In R0 (or other non-trim environments), the SFs will need less than the intel 510, which will need (much) less than something like the Crucial C300/M4 for the same r-e-w load in order to maintain speeds...

    ...so it's about choosing the best drives for the specific task - so suggesting that because a specific SSD can suffer greatly without trim then they all will simply isn't correct info.


    Even with the less robust drives though, the up-to-date info from highpoint & Scan is that the (already shonky) 6Gb/s highpoint card (that half of everyone bought d.t. the 'expert advice' from bittech) does not actually support trim - meaning that large numbers of people have been running (esp) C300s without trim (so no different than having them in a raid array).

    Now, the main reason why this hasn't had a "large impact" for most is down to people buying small SSDs d.t. price &, as a result, most of their ongoing r-e-ws post initial installation were having to be on HDDs instead of their OS/Apps SSDs - but the more r-e-w cycles that occurred on the SSDs, the greater the effect.

    Hence, again, it's about the right SSD for the right job - & a less good SSD can suffice if the r-e-w cycle load is low enough to be suitable.


    & otherwise, this is 'a' reason why i've repeatedly promoted increasing the OP (by underpartitioning) as, amongst other things, this will make SSDs far more resilient to heavier r-e-w cycles; whether as a standalone SSD in a trim environment or in whatever non-trim one...

    Whilst this 'apparently' isn't needed with the V3s, regular testers on the OCZ forum have been doing so still & appear to have more consistent speeds with heavier than normal loads.
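As a rough illustration of what underpartitioning buys (the figures here are made-up examples, not per-drive specs - actual NAND and spare-area amounts vary by model):

```python
# Rough illustration of over-provisioning (OP) by underpartitioning.
# Figures are examples only -- actual NAND/spare amounts vary per drive.
nand_gib = 128                        # physical NAND on the drive, GiB (binary)
partitioned_gb = 100                  # space you actually partition/use, GB (decimal)

nand_bytes = nand_gib * 2**30
used_bytes = partitioned_gb * 10**9
op_pct = (nand_bytes - used_bytes) / nand_bytes * 100
print(f"effective over-provisioning: ~{op_pct:.0f}%")
```

The spare area the controller can use for GC/wear-levelling scales up sharply as you shrink the partition, which is why it helps under heavy r-e-w loads.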


    Like the various revodrives & whatnot, the ibis uses a raid chip (it's effectively 4 V2s inside a single case - though with somewhat differing b/m results), so...

    ...well, it shows how robust the (last gen) SFs are in non-trim environments as there (obviously) is no trim for it (see for example this)...

    ...but the V3s are even better.

    [Edit]

    Forgot last night to mention that probably the best solution to work around the trim issue (if it's felt that it's needed) will be OCZ's VCA (Virtualized Controller Architecture) enabled drives that were 'shown off' at Computex this year.

    Basically they practically act as raid (re performance/leveraging multiple controllers) without actually being raid (so have trim & whatnot)...

    The first 3 products appear to be (for speed) the (ultra pricey) Z-Drive R4 & the RevoDrive x3... ...& the slightly unusual (as it's not 'that' fast - but does come in a 960GB version) Talos SAS SSD...

    Though i'd imagine that other versions won't be far behind (esp in the enterprise market - though there will be SATA versions in the fullness of time) - the ibis2 (whatever they call the 2nd gen HDSL interface SSDs) will certainly use this tech...

    Well, as the orig ibis was basically a RevoDrive x2 in a different format, with the added ability to have numerous ones connected to a single controller - then it's common sense.
     
    Last edited: 23 Jun 2011
  9. Buzzons

    Buzzons Minimodder

    Joined:
    21 Jul 2005
    Posts:
    3,025
    Likes Received:
    31
    soo.. couple of things

    1) 9265-8i because in the long term we may need to branch out to have more than 2 SSDs for the data, we're not quite sure how well they'll perform so if they suck we'd throw 2 more at it etc. It's a lab box (not production) so we generally just wanna make sure it all works well. (also already purchased)

    2) Data being sent is raw mpeg files basically and as such won't compress

    3) Didn't really want to go down the PCIe route as the potential production environments these will end up in are 1U servers with 2 SSDs (if it works with 2) and as such we can't really drop a PCIe card into them

    4) We've had an array of 16 SAS 15K RPM disks running this stuff before but we're looking to move to SSD for space/speed

    5) The disks will be idle for ~30-40% of each day, would that be enough to recover if we used non-trim devices?

    6) So what's the best option for TRIM over raid0 / Non Trim devices?

    Think that's every question answered?
     
  10. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    1. The limitation isn't going to come from the SSDs or the card but the network connection - whilst the sequential specs of the 9265 are much faster than those of the 9260 (with enough devices), if you can't actually access those speeds then it's meaningless. If you were looking at something like a database (where the iops would matter far more) then the story might be slightly different, but you're not, so... Your call though.

    2. RAW mpegs won't compress? Unless every frame were to be completely different & the pixels completely random & unrelated to each other then i can't quite see why they'd be completely incompressible.

    Whilst it's going to be a different algorithm to the SF's, simply as a guide, what happens if you use WinRAR/WinZip/whatever to compress a short clip? (if they were completely incompressible you'd expect a slightly larger file size naturally)

    3. No problem - it's just options... Though you're obviously sticking a pcie card in with the lsi.

    4. Fair enough.

    5. Providing the SSDs are getting power (whether the server is running at full power or in an S1 sleep state) then there 'should' be more than enough time within the parameters you're suggesting - though obviously getting ones which are more resilient in non-trim will make this much more so.

    6. You *really* have to be looking towards the enterprise drive end based on the described usage - well, raw mpegs are going to be 'slightly' on the large side...

    So, assuming the files are compressible, i'd be looking at either the eMLC or SLC V3 EX (they're borderline workstation/low end server solutions), or Deneva 2 C or R SSDs (proper enterprise solutions) - budget -> reliability, speed & longevity in roughly ascending order - though as i can't see any for sale it's probably one of those things where you'd need to contact OCZ Enterprise directly.

    if i'm wrong then you 'may' find a better option looking at something like an intel enterprise alternative - though i've not seen anything new & exciting about their enterprise products for a long while...

    ...but as, with consumer models, the SF controller is the most robust in non-Trim environments, you 'may' be losing one advantage to gain another.
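The quick compression check suggested in point 2 can be sketched with Python's zlib instead of WinRAR - a different algorithm again, so only a rough guide to what the SF controller would see, but the same idea:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original (~1.0 or more = incompressible)."""
    return len(zlib.compress(data, 6)) / len(data)

# Already-random data (a stand-in for compressed media) barely shrinks...
print(f"random bytes: {compression_ratio(os.urandom(1_000_000)):.2f}")
# ...while repetitive data collapses to a tiny fraction of its size.
print(f"repetitive:   {compression_ratio(b'the quick brown fox ' * 50_000):.2f}")
```

Running it on a chunk of one of the actual mpeg files (read the bytes in and pass them to `compression_ratio`) would settle the compressibility question quickly.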
     
  11. BustedTyre

    BustedTyre What's a Dremel?

    Joined:
    16 Jul 2011
    Posts:
    1
    Likes Received:
    0
    LSI 9265-8i

    1. The 9265-8i with current firmware and drivers is so buggy it's of no use, especially for SSDs. That is true with both Windows (it'd simply crash the server) and Linux (awfully hard to find working drivers and an msm package). I'd stay away for at least 6 months if not more.

    2. SSDs are a necessity for heavy databases and VMware hypervisors where I/O is pretty much random. For video production, I suggest you get a 2U/24-bay Supermicro SC216 or equivalent, with 7 LP PCIe x8 slots, a non-expander backplane with 6x SFF-8087 4-lane connectors, 24 2.5" 15K 147 or 300GB HDDs and 6 (yes, six!) LSI 9260-4i (or 8i, whichever you get cheaper) controllers. Spread the disks as 4 singles per controller and make a Linux mdadm RAID10,f2 set of 24 drives. Depending on your app, it might be as fast as 4-6 or even 12 GB/sec cached read/write and around 6000 random non-cached IOPS both ways. Unlike that of SSDs, the write performance of SAS HDDs is very stable.

    Whichever LSI controllers you get, mind you: they overheat and may burn under heavy load. Set the fan speed in BIOS of your SuperMicro to maximum and replace the left-side standard 6K RPM fan with a monster 11,000 RPM Supermicro fan of the same size. The noise is awful, but six controllers would melt the back of your server if you don't.

    If you transfer the files over the network, the bottleneck at these speeds would be the TCP/IP stack over slow 1Gb Ethernet. It won't be close to 100MB/sec, maybe half at best. Using InfiniBand eliminates it; it's relatively cheap and much faster than even 10GbE. If you only have one or two initiators (workstations) and one target (file server with disks) you won't need an IB switch, which is expensive. Use a 2-port QDR IB HCA in the server and 1-port cards in the workstations and connect them point-to-point. Each port is 40 Gbps full-duplex. Any Linux engineer can set up the server part, and the Windows drivers/stack for the w/s is rather simple. Then it shows in your network connections as a 40Gb WAN and works underneath Windows SMBFS or NFS, just as if it were TCP/IP over Ethernet.
     