
Storage: Why is the M4 so special?

Discussion in 'Hardware' started by Kernel, 29 Nov 2011.

  1. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139

    Well, it doesn't actually say that its GC is as good as trim - simply that -

    "(it has) built-in advanced background garbage collection. It automatically uses idle time to optimize data storage, so that disk writes are consistently fast. Since this technology is built-in, you'll get strong performance with Mac® OS X® and Linux as well."

    &

    "Multi-drive RAID configurations typically don't support TRIM. The built-in advanced garbage collection solves this problem and makes Performance Pro Series solid-state drives a great choice for RAID arrays."

    - which could be equally said about any SSD with GC...

    ...& i would bet large sums that you could back the SSD into a corner in non-trim with sufficiently high random data (based on the amount of free space) & insufficient idle time such that the writes did suffer significantly - in the same way as happens with the M4 & C300 &...

    (the SFs generally retain write speed but lose read speed in the same scenario)


    You then do have to remember that all SSD warranties are based upon not exceeding a specific number of r-e-w cycles - if you go over them then there is no warranty at all, which is where longevity comes in 'if' you want to be writing to the things heavily.

    (also remembering that wear levelling & block combining & whatnot as part of GC uses up cycles)


    As to the worn flash cells becoming unreadable, my understanding is that what you've suggested isn't quite what's supposed to happen.

    Well, with current MLC, the gaps between the different voltages (known as the read margins) for the 4 possible states (00, 01, 10 & 11) narrow over time - in theory, as you say, to the point where the data would be unreadable - esp if left unpowered for a period.

    However, to the best of my knowledge, there's supposed to be 2 methods in use to work around this to provide a more sensible minimum data retention time without power.

    (this is part of the JESD218 standard - which is what's used as the basis for rating endurance... ...though it's normally when a fraction of a percentage of the nand cells has become read only rather than all of the nand)

    1. though afaik it's old hat, you limit the max number of writes to a cell to a fixed quantity that's less than or equal to the expected cell cycle rating - making each cell read only once it reaches that point.

    2. or the SSD looks at the read margins & makes cells read only when the gap narrows to a certain point.

    in both cases, this allows a far longer data retention than if the SSD controller were to just keep on using the nand until a literal breaking point - which is what your suggestion would allow to happen.
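
    (& purely to illustrate the logic of the 2 methods - a toy sketch of my own, not any real controller f/w, & both threshold numbers are invented -)

    ```python
    # Toy sketch of the 2 cell retirement methods above - illustrative only,
    # not real controller firmware; both thresholds are invented numbers.

    CYCLE_LIMIT = 3000       # method 1: fixed cap at/below the rated cell cycles
    MIN_READ_MARGIN = 0.15   # method 2: retire once the margin narrows to here

    def should_retire(cell_cycles: int, read_margin: float) -> bool:
        """Make a cell read only *before* literal breaking point, protecting
        the data on it & keeping retention sensible."""
        return cell_cycles >= CYCLE_LIMIT or read_margin <= MIN_READ_MARGIN

    # eg a cell at 2,900 cycles whose margins have already narrowed badly:
    print(should_retire(2900, 0.12))  # -> True (caught by method 2 first)
    ```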

    [Edit]

    Plus, as a part of GC, wear levelling helps to maintain endurance.


    Looking up the specifics of the data retention for JESD218 (page 7), a client SSD needs to retain data for 1 year @ 30C with an Uncorrectable Bit Error Rate (UBER) of <=10^-15.


    This in mind, something i'd forgotten to include about the SFs is that their RAISE technology takes the UBER to ~10^-29 vs ~10^-15 for most other controllers... ...& can recover much larger quantities of data than standard ECC.
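
    (to give a feel for what those exponents mean in practice - my own back-of-envelope sums, assuming UBER is simply uncorrectable bit errors per bits read -)

    ```python
    # Rough feel for the UBER figures, assuming UBER = uncorrectable bit
    # errors per bit read (back-of-envelope sums, not vendor data).

    def tb_read_per_error(uber: float) -> float:
        """Average TB (decimal) read between uncorrectable bit errors."""
        bits_per_error = 1 / uber
        return bits_per_error / 8 / 1e12  # bits -> bytes -> TB

    print(tb_read_per_error(1e-15))  # ~125 TB between errors - typical controller
    print(tb_read_per_error(1e-29))  # ~1.25e16 TB - the RAISE figure, ie 'never' irl
    ```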

    i guess this is 'a' reason why LSI have bought out SF... ...& with LSI's in-house testing being pretty damn good, i very much doubt that there'd be another repeat of the BSOD issues with the next SFs.

    i had seen this & potentially it's very interesting - though half the internet thought that a different version of IRST (or whatever the intel driver was called at the time) was giving this back in, if i remember correctly, March 2010 based on a similar document...

    ...it transpired that it simply allowed an SSD in non-raid to get trim commands if the controller was set to raid.
     
    Last edited: 30 Nov 2011
    adidan likes this.
  2. adidan

    adidan Guesswork is still work

    Joined:
    25 Mar 2009
    Posts:
    20,167
    Likes Received:
    5,967
    ^ It's too early for me to read all of the above but rep for writing so much so early in the morning. :)
     
  3. MSHunter

    MSHunter Minimodder

    Joined:
    24 Apr 2009
    Posts:
    2,467
    Likes Received:
    55
    Last edited: 30 Nov 2011
  4. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    lol... So much for trying to impart info... i should just randomly hit keys in response to someone's post every early morning when insomnia is striking. ;)

    [Edit]

    Going back to my last post - saying that wear levelling uses up cycles & yet also helps to maintain endurance (both in terms of data retention & otherwise) may sound completely contradictory, but the two aren't exactly at odds...

    Naturally it's all about balance - as you don't want data being constantly shifted to use up cycles for no good reason - but -

    (a) if data remained static until it was deleted, you would only be using the cells in the free space + OP over & over again -

    so, say you had 20GB in FS+OP, the cells were rated to 3,000 cycles & you had a combined write amplification + block combination factor of 10 (which i think is very much on the low side for a non-SF used as an OS drive with all the user files & whatnot on there - though we need a figure to work with) - you would only be able to write 6,000GB (quick sum below the NB)...

    ...whereas, by wear levelling, although you increase the cycles by moving static data, the SSD can then balance the dynamic data cycles across the SSD.


    (b) similarly, by moving data to less used cells, there will have been less narrowing of the read margins on the cells that then contain data...

    ...but also, there is a 'disturb' effect that can occur when writing (esp random) data to nand physically near to nand that contains data - & by moving data around en masse, this can correct for that before it becomes a problem.

    [NB wear levelling is the basic way of doing this, but there are more aggressive techniques - such as the 'background data refresh' bit of intel's HET on, for example, the 710 SSDs - which is basically "wear levelling v2.0".]
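
    (& the quick sum promised in (a) - same invented figures, purely to show the scale -)

    ```python
    # The sum from (a) above - invented figures, purely illustrative.

    fs_plus_op_gb = 20    # free space + OP being cycled over & over
    rated_cycles = 3000   # nand cycle rating
    effective_wa = 10     # combined write amplification + block combination

    # without wear levelling, only the FS+OP cells ever take the wear:
    print(fs_plus_op_gb * rated_cycles / effective_wa)  # -> 6000.0 GB of host writes

    # with wear levelling, (eg) a whole 120GB drive's worth of cells shares
    # the dynamic data instead - hence the endurance gain:
    print(120 * rated_cycles / effective_wa)            # -> 36000.0 GB
    ```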


    i'm not 100% sure that this is a complete explanation, as VCA 2.0 is also available with the (enterprise) Talos/Talos2 SSDs which are 3.5" & 2.5" (respectively) SSDs...

    ...the theory being (as i've not seen any test data on the Taloses in raid so...) that you can use VCA 2.0 enabled SSDs on any raid controller... ...&, along with allowing trim commands, it also allows a host of extra ATA & SCSI commands (predominantly for reliability afaik) to be passed to & from the drives that aren't available to other SSDs atm.


    Similarly, you do have other pcie SSDs which will also give quite foolishly high speeds but, afaik, VCA is the only tech atm that allows these 'extras' & was developed by OCZ themselves...

    ...so, unless they've licensed it out to competitors, it's not an explanation for the 'headline' speeds (as you could get them with, what, 2x 120GB max iops SSDs in R0 on a half decent controller & have double the capacity), but for improving the maintenance of speeds + giving more available commands.
     
    Last edited: 30 Nov 2011
  5. adidan

    adidan Guesswork is still work

    Joined:
    25 Mar 2009
    Posts:
    20,167
    Likes Received:
    5,967
    Lol, sorry, but it really was too early this morning to read that.

    Good stuff though. :thumb:

    At 5am my body is making the coffee while my head is still trying to get out of the bed and join me. :D
     
  6. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Wuss... You just don't care enough about random tech discussions do you...?

    Your excuses count for nought with me... ;)
     
  7. jtek

    jtek What's a Dremel?

    Joined:
    17 Feb 2009
    Posts:
    25
    Likes Received:
    0
    Not all GC is created equal, that's the point.

    Some, such as the M4's, is so poorly implemented that it won't run as long as the drive has recently been sent a command by the controller - so it never runs while the PC is booted, and depending on the controller may never run at all.

    To me what they are saying does seem to imply that their GC implementation is on par with trim: 'The built-in advanced garbage collection solves this problem' - the problem being lack of trim.

    Of course it may not solve the problem as well as Trim, but it has to be close for them to make that claim. If I get a pair in Raid and my writes halve over a few weeks like they would on the M4, the product has been mis-sold and I'd return it as faulty!

    Also, is there any reason why the drive has to be completely idle for GC to run? Maybe the corsair gets around that and runs it unless the drive is very busy.
    Not in the context of a desktop environment though, which is what matters. You will always have lots of idle time. The Corsair is supposed to have good enough GC that it can do it under a typical desktop scenario, whereas the M4 has been shown not to. I'd like to see it tested mind you.
    I thought if it says 3 years that's what it means? Have you seen a reference to this in the terms? Worrying if true.
    So the failure mode in both cases would be that the SSD shrinks in size? Or is it too stupid to do that, and simply corrupts data as it writes?

    I've often wondered why it is that failing harddisks don't simply shrink their size. Once they run out of re-allocatable sectors, they should map bad sectors to empty ones on the drive. Instead they just stay bad and corrupt data.
    Which would be very annoying for someone about to buy an SSD now. Anyone got a prediction on release date for new SF?
    Do I trust them enough to go and buy a pair of SSDs now that will degrade horrendously and hope they release it? 'Sorry, that's exclusive to ivy bridge'.:eyebrow:
     
  8. driftingphil

    driftingphil What's a Dremel?

    Joined:
    17 May 2009
    Posts:
    52
    Likes Received:
    0
    I wouldn't touch a Marvell SSD with a ..., after they messed up the 6Gb/s SATA controller on my X58 board and turned it into something slower than my intel 3Gb/s SATA 2 port.

    sandforce all the way b...... .
     
  9. jtek

    jtek What's a Dremel?

    Joined:
    17 Feb 2009
    Posts:
    25
    Likes Received:
    0
    Well, apparently their SSD controllers are a lot better than their SATA controllers. Sandforce on the other hand, comes with free random BSODs. No thanks.


    While we have an SSD expert (PocketDemon) in the house, is it true to say that if I ran a pair of the Corsair Performance Pros in raid 0, I would be halving the write wear because each drive only gets half the writes?

    Also because of the higher storage space it should be less full and so it will last longer thanks to the wear levelling?


    If I could run the M4 in Raid 0, that would be my best setup. But I'm assuming even with the latest firmware the GC is still awful and the performance degradation would be really bad.
     
  10. debs3759

    debs3759 Was that a warranty I just broke?

    Joined:
    10 Oct 2011
    Posts:
    1,769
    Likes Received:
    92
    Sandforce had a problem with some earlier controllers and firmware, but that was solved a long time ago. People constantly giving out that sort of misinformation is why others don't always get the best they might.
     
  11. jtek

    jtek What's a Dremel?

    Joined:
    17 Feb 2009
    Posts:
    25
    Likes Received:
    0
    From what I'm reading, the problems are by no means solved. The Tech Report reviewed the new firmware and found lots of people still reporting problems on the OCZ forums after upgrading to it. It only solved some of the problems.

    They never managed to reproduce the problem in the first place, so user reports are the best we've got.

    What information do you have to suggest these problems have been solved? I hope they have because then I can go and buy one.
     
  12. debs3759

    debs3759 Was that a warranty I just broke?

    Joined:
    10 Oct 2011
    Posts:
    1,769
    Likes Received:
    92
    I'm not sure where I read that early problems have been solved, but I certainly haven't been hearing people with SF SSDs saying they are having problems. Only people who don't have them saying not to buy them.
     
  13. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Just trying to pick up a few points from jtek's posts...

    Yes, GC is not 100% comparable between manufacturers &/or (esp) controllers... ...though you will tend to find that there is much more difference between controllers than manufacturers.

    Then, certainly with the SFs, Corsair have never been known for being particularly proactive with f/w (at least compared to OCZ) - so, whilst they've upped the write speeds on their version of the same Marvell controller as the M4, the idea that they've done something significantly different with the GC from the default M4 settings would surprise me hugely.


    GC is properly called "idle time garbage collection" - whilst any idle time between r/ws could allow some alterations to take place, without 'real' idle time the effects will be minimal & could end up being counterproductive.

    Now, if you really wanted to allow GC to be optimal then you'd either boot to the bios, log off from Windows (though some processes may interfere with the latter) or use (usually) a S1 sleep state (well, S3 gives no power to the drive, from independent volt meter testing) - & leave it at that for a few hours...

    ...though this should only be necessary relatively infrequently unless you have a very high (esp small random) r-e-w cycle usage &/or are not in a trim environment &/or were to only turn the machine on & off to carry out very limited functions.

    Again, (esp) different controllers & situations will alter the need for this - so, for example, a heavily used M4 in non-trim will tend to need more idle time than an equivalently used 6Gb/s SF in non-trim.


    As nand has a specific rating, the warranties for all SSDs are based on the specific nand rating that you pay for.

    Whilst OCZ have been 100% clear on that in their forum for a couple of years, i cannot say how straight up other manufacturers have been about this, but it is standard practice - well, otherwise we could all thrash our SSDs to death doing foolish things & we'd bankrupt the manufacturers.

    [NB yet again, i have no affiliation with OCZ - i simply know stuff from their forum having bought SSDs from them... ...& as it was very informative (esp early on) about the tech.]

    Quick check &, for example, Crucial state about the M4 that it has a "Limited three-year warranty"... ...&, reading through their warranty terms, there are specific limitations - it "will be free from defects in materials and workmanship affecting form, fit and function" (which a nand cycle limitation rating provides a limitation to) & there is no warranty re "misuse" & "abuse" (which thrashing a SSD would be).

    Or, as a much better example, there's point 4 of intel's warranty page here which explicitly states what i've referred to as applying to their SSDs.

    Even after a SE, the min, average & peak cycles used remain stored on all SSDs in the SMART data.


    As to nand failing, this is one of the usages for OP with SSDs - so the user area remains constant, but the OP diminishes.

    [i know, anecdotally, with an old indilinx SSD that had been very heavily used for testing, a mod on the OCZ board reflashed a 128GB V1 to a 64GB to work around this - the remaining nand that was still operational became a huge amount of OP (kind of akin to what i have by under partitioning my old V Turbos - though that's my choice rather than necessity).]

    What i was referring to though, with regard to either limiting to specific nand cycles (which again is, afaik, old hat) or to when the read margins narrow to a certain point, is not that the nand has 'failed', but that, in order to protect the data on it, those cells become read only...

    ...so you're kind of correct that the total capacity will shrink (again using the OP to replace nand), but only when the data is moved/deleted from those cells.

    [NB going back, i did try to get an answer from OCZ's engineering dept as to what happened when all of the standard OP had been used to replace failed cells, but unfortunately didn't get a reply.]


    New SF - probably properly announced in Feb-March 2012 & to buy in May-June...

    Across the board in 2012, i'd expect a shift to TLC (3 bits per cell) nand in order to significantly lower pricing from MLC - though this will come at the cost of longevity, so my prediction is that there will still, at least from the likes of intel & OCZ, be 'premium' MLC consumer SSDs... ...whereas the likes of Crucial will concentrate solely on lower cost.


    & i've quite happily been using 4x gen1 SFs in R0 without trim - that the SFs are much better in non-trim is the reason why i went for SFs over the C300.


    Again, if you're unlucky, the M4s come with random cold boot & freezing issues... ...neither that nor the BSOD issue that (similarly) a tiny fraction of a %age of people got with the SFs would be great - but you could return either for a refund if you had issues as they wouldn't be performing as sold.


    As to your next question, i'm not entirely sure if i'm reading it correctly, so there's 2 answers -

    1. Well, if you took either a single brand/model "banana" 100GB SSD or 2x brand/model "banana" 50GB SSDs (in R0) & then wrote some quantity of data to either, there'd be no difference in the amount written...

    ...however (off the top of my head) -

    (a) because the data would be striped between the 2 SSDs in R0, there is more chance of there being partially used blocks that may subsequently need combining &/or the data moving if other pages within the block containing different data are deleted.

    (b) particularly with non-SFs (though this does also apply to SFs - but they gain from significantly lower write amplification & compression, along with much faster 'on the fly' block consolidation), trim will reduce the random moving of data in dirty blocks by GC.

    (c) with 2 SSDs, because GC can act independently on each of them, it will be more efficient than on the single SSD - which improves endurance without having to artificially introduce as much idle time (if any).

    (d) there's probably some other things that i can't think of atm... ...just tired. :(

    Overall, there are pros & cons to both - &, of course, you need to take into account the controller's maintenance of speed without trim.


    2. if, instead, you literally meant either buying a single brand/model "banana" 100GB or 2x brand/model "banana" 100GB (in R0) SSDs, then if you only stored & wrote the same amount of data to the R0 array as you would have done to the single 100GB SSD, the advantages in terms of actual speeds, maintenance of speeds & increased longevity (esp if you increased the OP by under partitioning rather than just leave the extra 100GB as free space) would be huge (quick sum below the NB).

    [NB everything in 1 above will still apply, but it will weight it more heavily towards the R0 setup.]
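
    (& the quick sum for 2 - invented figures again, & ignoring the pros & cons from 1 for simplicity -)

    ```python
    # Quick sum for 2 above - same data on 1x 100GB vs 2x 100GB in R0
    # (invented figures; ignores the pros & cons from 1 for simplicity).

    data_written_gb = 50
    for n_drives in (1, 2):
        per_drive = data_written_gb / n_drives  # striping splits the host writes
        print(f"{n_drives} drive(s): {per_drive}GB written per drive")
    # -> 1 drive: 50GB; 2 drives: 25GB each - & with double the total nand
    #    there's double the free space + OP to wear level across, hence the
    #    longevity gain.
    ```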


    Oh, & whilst not what you've asked, there are the obvious mixed advantages of, depending on the actual capacities & the model, (usually) faster small r/ws on the single larger SSD vs faster high QD & sequential r/ws on the R0 setup.


    As to having 2 M4s in R0 - it's not that they're inherently shonky, but comparatively so - the 6Gb/s SFs are simply much better suited to non-trim.

    What it really depends upon is the r-e-w cycles compared to (a) the idle time & (b) the amount of free space (&, if you can manage it, under partitioning for extra OP - but you prioritise free space first).

    Well, if you had a R0 array of SSDs with (comparatively) 100% static data - ie as a games installation drive where you relatively infrequently uninstall one game & install another - then you could effectively get away with having no GC, free space or OP...

    ...but the more that's written to & deleted from the array, the better the GC & the more free space & OP that will be needed to keep it optimal.

    [NB as mentioned, extra OP also dramatically increases the longevity as i've linked to a few times before... ...too tired to find it atm.]


    [Edit]

    Before finally going to bed, i've just remembered where the latest endurance figures for increased OP came from - where it says "Product Spotlight" (on the right hand side towards the bottom of the 1st page)...

    ...& from Anand's testing, the 200GB model has 37.5% OP as standard - so it's clear that extra OP has a significant effect.

    (i'm not exactly sure how Anand got >41% - my best guess is that he's mixing units, treating the 320 total nand capacity as binary GiB but the 200 usable as decimal GB - hence the 37.5% comes from treating them both as the same unit.)
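
    (the sums, for anyone wanting to check my unit guess -)

    ```python
    # OP sums for the 200GB usable / 320GB total nand drive - 37.5% comes from
    # treating both figures as the same unit; my guess at the >41% is decimal
    # GB for the usable capacity vs binary GiB for the total nand.

    usable, total = 200, 320
    print((total - usable) / total)      # -> 0.375 (37.5%)

    usable_gib = 200e9 / 2**30           # 200 decimal GB ~= 186.26 GiB
    print((total - usable_gib) / total)  # -> ~0.418 (>41%)
    ```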
     
    Last edited: 1 Dec 2011
    jtek likes this.
  14. jtek

    jtek What's a Dremel?

    Joined:
    17 Feb 2009
    Posts:
    25
    Likes Received:
    0
    Ok, back to the practical. I'm in the market for a really fast SSD setup for my new build; one of the reasons I want to go a bit overboard with it rather than just get one drive is longevity, as I suspect the SSD will go out of date the fastest compared with other components. In other words I suspect this will be the area that sees the biggest performance increases over the next few years.

    So, what would you recommend for a Raid 0 setup? I'll be running it as my boot drive, plus it would also have all my programs on it. But no user data.

    I had my eye on the performance pro because of the claims they make about 'raid compatibility'. I think the M4 is out because of its lacklustre garbage collection.

    I didn't realise that sandforce drives were ok to run in raid without trim. What kind of performance degradation am I likely to see?

    With the marvell controller I was under the impression they have to write their own firmware from scratch... or do they get given a standard one by marvell, and then tweak it?

    I mean it's possible that the GC on the M4 has been massively improved with the recent firmware updates and we just don't know about it. The point I'm making is Corsair must have done something different from everyone else to be able to say their drive doesn't suffer from not having trim. Particularly as the only other popular drive with that controller is known to be extremely weak on this point.

    It's a very easy thing to test, if I get a pair I'll know soon enough by monitoring performance. I just don't want to be the guinea pig.

    I must say I'm surprised by how few good reviews there are of SSDs and how poor most of them are, particularly since there has been an almost unprecedented lull in the pace of hardware development recently, with nothing of note happening on the CPU or GPU front.

    Is it really too much to ask for someone to get all the major SSDs and actually test the GC, test the degradation in raid, and get some answers?

    So if I under-partition my SSD, the controller will still use all the available nand, both for OP and for performance using all channels on the controller? Is this still worth doing if you aren't going to fill the drive up, is there an advantage to preventing the OS from touching a chunk of the memory?

    I thought it was just sandforce drives that over provision, are you saying all SSDs have far more memory than they actually present to the user, in order to increase lifetime? In the same way that harddisks have spare sectors, but on a much bigger scale?

    Again this is a question the reviewers ought to be asking, they'd sure get an answer.
    I suspect it behaves exactly like harddisks do when they run out of re-allocatable sectors, i.e. very stupidly - making the drive unusable by corrupting data, rather than shrinking it, which both harddisks and SSDs could do.

    So is there a way to manually add to the over provision nand? I assume the only way is firmware flashing the drive to think it's a smaller one as you mentioned. Under-partitioning doesn't do this I assume?
     
  15. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    if you actually need R0, i would go SF - ideally a 3Xnm sync nand model like the V3 max iops or Patriot Wildfire (i'd priced the former down to ~£165 each using a mix of cashback & voucher codes from dabs a day or two ago - buying them as 2 orders) - simply as they are far more robust than the alternatives in non-trim environments.

    [Again, 'if' you had issues then you return them for a refund & try something else.]


    There's then no reason at all not to have your user data on - though naturally moving things that would thrash them (ie p2p downloading) or are just random storage (ie if you download large amounts of video or audio) would be a bit pointless. Similarly, you would need a very good reason to want to do something like video editing or transcoding on them - well, as the process is highly sequential, fast dedicated source & destination HDDs are a vastly better alternative.

    So it's more about (re)moving the data types that will not gain anything than arbitrarily deciding that all user data is 'bad'.


    Then all SSDs are nominally raid compatible - well, even before GC & Trim they were (though the speeds suffered hugely without the use of specific tools &/or frequent SEs).


    As to performance degradation, it's really down to your cycles vs free space & OP... ...well, the V Turbo (indilinx) in this machine does not slow down as it has both stupid amounts of free space & OP...

    ...but if you kept the OP as standard & had almost no free space then there will be far more degradation in speeds.


    Afaik, there are standard initial f/ws for the Marvell controllers... ...but the different manufacturers then alter & validate them individually, & there's no official direct 'cross feeding' of ideas...

    ...effectively meaning that if you were to jump upon the wrong manufacturer then you could end up with a significantly inferior SSD.

    This is a bit of a difference between them & (what's happened so far with) the SFs as, with the latter, the premise is that all of the manufacturers test & tweak & whatnot & then feedback to SF which then informs the next base f/w release that goes out to all of the manufacturers...

    Now, even with the SFs, there are some manufacturers who are very proactive with testing & getting f/ws out there (ie OCZ) & others which aren't (inc Corsair)... When you have a mature & stable product, this isn't of huge concern, but when something's new (whichever controller we look at) then it's something that has to be kept in mind as it may come into play - there being no way of knowing other than in hindsight.

    [NB in general, whilst Crucial have cocked up at least one f/w release - there was one that bricked loads of SSDs - both they & intel (though them not adding trim/GC to their 2009 products was very shonky) are usually 2 very good companies with f/w.]

    So anyway, whilst it 'could' be the case that Corsair have done something magical with the GC, 'a' concern would be that they're not known for being the fastest moving company for f/ws - if there were to be any issue.


    As to testing in raid, assuming you work to roughly the same %age usage & looked at comparably sized drives, you can roughly double the 'lets copy all of the data in the world on & off with trim disabled' testing (which i despise as it's nowhere near most people's r.l.) as the worst case that you could ever see...

    (if you want to buy me lots of SSDs then i'll happily play with them btw ;))


    Now if, for your personal r.l. usage, you did notice a significant slow down then you could simply add in artificial idle time (as previously described) to allow the SSDs to recover.


    Atm, all SSDs have some OP - the 6Gb/s consumer ones all have at least ~6-8% OP...

    (The SFs also allocate some nand for RAISE - which, as i mentioned earlier, dramatically improves the ability to recover from errors & reduces the error rate)

    ...& enterprise ones (usually) have far more.


    Now, if you had a 100GB SSD with 110GB of nand in total (ie 10GB of OP), & under partitioned it by 20GB - so 80GB of accessible space - the controller would still see the full 110GB & would use the 30GB as OP...

    ...so, yes, under partitioning does increase the OP...

    (though i do not quite know if the mod on the OCZ forum had actually needed to force flash an alt f/w on his (original) vertex - as it was an aside almost 2 years ago, from recollection it was possibly more of a 'proof of concept', as the main function of the post had been to describe how OP had made the drive vastly more robust)

    ...remember, of course, that the OP area isn't a specific set of nand cells but the blocks that contain it can be spread all around the SSD(s).

    &, naturally, if you stuck 2 drives in R0 & under partitioned them, there'd be a 50:50 split between the 2 with the extra OP - or with my 4, there's a 25:25:25:25 split.
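
    (as a quick sum, using the same invented 100GB/110GB drive -)

    ```python
    # Effective OP after under partitioning - same invented 100GB/110GB example.

    def op_gb(total_nand_gb: float, partitioned_gb: float) -> float:
        """Nand the controller sees but the filesystem can never touch."""
        return total_nand_gb - partitioned_gb

    print(op_gb(110, 100))  # stock partition: 10GB of OP
    print(op_gb(110, 80))   # under partitioned by 20GB: 30GB of OP
    # & in R0 the extra OP splits evenly - 2 such drives = 30GB apiece (50:50)
    ```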

    [NB 'if' you have the option with your raid controller, the best method is to reduce the size of the array when you're creating it...

    &, generally, you aim for a larger stripe size to promote large sequential speeds & a smaller one to promote smaller r/ws - for most consumers, something like a 64K or 128K stripe will give the best balance.]
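
    (& a toy sketch of why the stripe size matters - nothing to do with any real raid controller's internals, just the basic address maths -)

    ```python
    # Toy sketch of R0 striping - just the basic address maths, nothing to do
    # with any real raid controller's internals.

    def drive_for(byte_offset: int, stripe_kb: int, n_drives: int) -> int:
        """Which drive a given byte offset lands on for a given stripe size."""
        return (byte_offset // (stripe_kb * 1024)) % n_drives

    # a 256KB sequential read with a 64K stripe hits all 4 drives in parallel:
    print({drive_for(off, 64, 4) for off in range(0, 256 * 1024, 4096)})    # {0,1,2,3}

    # the same read with a 1024K stripe sits entirely on one drive:
    print({drive_for(off, 1024, 4) for off in range(0, 256 * 1024, 4096)})  # {0}
    ```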


    Then, what i did with my SSDs was to buy capacities such that i could make things as robust as possible - maintaining at least 25% free space (it's usually a bit more than this) & 28% (total) OP on my V2 array // though with >35% free space & ~55% (total) OP on the V Turbo in this machine (& more free space on the other V Turbo that i use as a scratch disk).

    This, however, is simply me having looked at my budget (at the time) & bought on the basis of prioritising the maintenance of speeds & longevity - whilst having more than enough space for what i wanted to install...

    (& of course the V2 array replaced the V Turbos rather than aiming for such a huge amount of FS & OP on them)

    ...so i am *not* saying that if this is not followed then things will be instantly shonky & people's SSDs will die in seconds.


    instead, it's always been about trying to show that there are significant proven advantages to (a) firstly maintaining a decent amount of free space (initially prioritise this over extra OP) & (b) secondly increasing OP...

    ...& leaving it up to the individual & their budget to make the best choice they can.

    [NB again, if you have (effectively) 100% static data then you do not really need any free space (other than enough to stop Windows crying that the drive's getting full), trim or GC - but having something like a game installation only drive is not most people's first priority for SSDs.]
     
  16. jtek

    jtek What's a Dremel?

    Joined:
    17 Feb 2009
    Posts:
    25
    Likes Received:
    0
    All fascinating stuff again, much appreciated. Hope no-one minds the thread hijack.

    Well of course I don't need Raid 0, but I'm probably going to have it anyway, because I can :p

    Which models have '3Xnm sync nand' and how do I tell?

    I'm aware of the Dabs deal with the cashback and the voucher codes (can the cashback site code be used along with the website one?), I'm about to take advantage of it for my motherboard and memory.

    I'm on a budget of £300, what would you do? Can I get '3Xnm sync nand' for that, if not is it worth breaking the budget for?

    I run 4x 2TB Seagate 5900rpm drives in my fileserver; they go over 350MB/s sequential. So I should be able to make use of that if needed to reduce wear on the SSD, although getting video editing software to use it rather than the boot ssds may be a challenge.


    Edit: Corsair Performance Pro gets 320MBps write, is the Vertex 3 max iops faster even for incompressible data?
     
  17. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    i've always used as many things as possible to lower prices, so can't see why it'd be any different... &, with one of the cashback sites, you can get £10 off a £149 spend + 4.12% cashback (i believe off the ex-VAT remainder) until the 4th - hence suggesting splitting it into 2 orders.

    Okay, even doing that, you'd be ~£30 over the £300 limit you've given, but...

    [this was the cheapest way i could see of getting 2x 120GB 3Xnm SFs]


    As said, unless you're looking at ~99.9% incompressible data, the AS-SSD results for the SFs are as meaningless as the ~100% compressible data scores...

    But, for the sake of giving them, for reads a single 120GB max iops will give ~500-505MB/s incompressible sequential reads (up to a max of ~545MB/s) - & with writes ~240MB/s with 99.9% incompressible data (up to a max of ~505MB/s)...

    ...but general r.l. OS/apps/whatever data (with the exception of things like jpegs, mp3s, flvs, etc) tends to be neither one extreme nor the other - so, unless you are aiming to have a significant usage being highly incompressible files, you are actually looking at results in between the 2 extremes.

    [bittech or other review sites 'suggesting' that either AS-SSD results or ATTO results have anything to do with most people's r.l. usage is very misguided information - unless, again, you have a very specific usage...

    ...& similarly, neither b/m gives a reasonable breakdown of r.l. QDs - which tend to be ~3-7 or 8 for small r/ws within normal OS & whatnot usage - or, with AS-SSD, any r/w sizes other than large sequentials & 4Ks - which isn't r.l. usage for most consumers.]


    Now, this is where something like Anand's light & heavy workload b/ms come in as they compare drives using the actual types of data that would be used by most people using a SSD as an OS, apps, games, user data, etc drive...

    [NB i am not saying that these are perfect, but they're a damn sight better than 2 generally meaningless extremes.]

    So, using a f/w at a time where, as bittech noted (& i've previously mentioned that it was part of the trouble shooting exercise that SF were going through), speeds had been decreased for the SFs - you end up with the comparative speeds shown here (& on the next page) for the 2 different workloads compared to the SSDs that were out at the time.

    [NB these were before the 0009 f/w update for the M4, but the 120GB max iops outperforms the 256GB M4 when you compare the results in the previous link to these - the 128GB M4 wasn't tested.]


    Now atm, Anand hasn't tested the new Corsair Marvell thing, but it would need to be significantly faster than the M4 irl across a broad range of data sizes to catch up for most people & would need something magical doing with its GC to be as good in non-trim.


    Again though, i am *not* suggesting that, for example, the M4 is shonky & it has to be appreciated that it is noticeably cheaper - but it isn't as fast (even with the new f/w) for general usage, unless you had a *really* unusual usage type that was primarily based upon incompressible (ie already heavily compressed) writes...

    ...&, again, there are advantages with lower write amplification & better error correction & whatnot to the SFs.


    Also, again, i'm also *not* saying that the M4s or the new Corsair Marvell thing or whatever other non-SF will die or become instantly shonky in a non-trim environment - simply that the SFs are more robust in non-trim with heavier cycles...

    ...nor promising anyone that they would not have cold boot or freezing issues with a M4 or bsod issues with a 6Gb/s SF - again, in both cases, it's 'luck' (or lack of it), but the chances of either are very slim.


    As with most things tech, it's all balancing cost with other advantages & deciding whether the premium's worth paying for you, as an individual, for your usage.

    Well, it's not my money you're spending so...



    [Edit]

    Quick addition - my saying that Anand's light & heavy b/ms are not perfect is largely because every person's actual r/ws will be (at least subtly) different.

    Either one or the other will be, however, a very good indicator of r.l. performance for the vast majority of the consumer end usage...

    ...& if you had a very unusual/particular data type usage that fell outside of them then you'd know about it.
     
    Last edited: 2 Dec 2011
  18. drakanious

    drakanious What's a Dremel?

    Joined:
    12 Apr 2007
    Posts:
    24
    Likes Received:
    0
    One important consideration is that the speed of a SSD is sometimes highly dependent on the size of the drive. This seems to be especially true for SandForce based drives.

    I agree with PocketDemon: For getting a quick overview of a SSD's performance, I think one of the best places to go is AnandTech's SSD Bench. I don't trust synthetic benchmarks when I look for a graphics card; why should I trust them when I'm shopping for a SSD?

    Here's a comparison of the OCZ Vertex 3 120GB and M4 128GB: http://www.anandtech.com/bench/Product/350?vs=425

    For some workloads the Vertex 3 is faster, and for others the M4 is faster.

    Here's another comparison; this time between the 240GB Vertex 3 and the 256GB M4: http://www.anandtech.com/bench/Product/352?vs=355

    Now the Vertex 3 is faster!
     
  19. Slizza

    Slizza beautiful to demons

    Joined:
    23 Apr 2009
    Posts:
    1,738
    Likes Received:
    120
    Doesn't say what firmware is being used.
     
  20. drakanious

    drakanious What's a Dremel?

    Joined:
    12 Apr 2007
    Posts:
    24
    Likes Received:
    0
