
Storage Samsung 830 Write speeds

Discussion in 'Hardware' started by Pookeyhead, 13 May 2012.

  1. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    One thing that Anand always seems to leave out of his calculations of nand longevity is the effect of increasing the write speed.

    Well, write amplification is literally the ratio of what is physically written vs what is logically written - typically no more than ~12x, but you get odd drives that are higher & the SFs are much lower.

    Now, what Anand has always appeared to do is to multiply the nand's P/E spec by the capacity & then divide by whatever measure of write amplification he's decided is correct for whatever workload, to give a total amount of data that can be written...

    ...& then made assumptions as to how this could be broken down into XXGB a day over X years.
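    The back-of-envelope calculation described above can be sketched as follows - note that the figures used here (3000-cycle MLC, a WA of 3x, 10GB/day) are purely illustrative assumptions, not Anand's actual numbers:

    ```python
    # Endurance estimate in the style described above; all inputs are
    # illustrative assumptions, not any reviewer's actual figures.

    def years_of_writes(capacity_gb, pe_cycles, write_amplification, gb_per_day):
        """Total logical data the nand can absorb, spread over a daily workload."""
        total_physical_gb = capacity_gb * pe_cycles          # raw nand endurance
        total_logical_gb = total_physical_gb / write_amplification
        return total_logical_gb / gb_per_day / 365

    # e.g. a 256GB drive with 3000-cycle MLC, WA of 3x, writing 10GB/day:
    print(round(years_of_writes(256, 3000, 3.0, 10), 1))   # → 70.1
    ```

    Which also makes the post's point visible: nothing in this formula varies with how fast the nand is written, so any speed-dependent wear falls outside it.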


    So, what this doesn't appear to account for is that, as you increase the write speed with given nand, the longevity decreases (see page 12 here for example).

    Functionally, whenever you write to a nand cell, it gradually reduces the gap between the nand states - when the gap becomes too small, the cell fails (see here for example) - & increasing the write speed accelerates this process.


    Whilst not a great example (as SFs have much lower write amplification), the 29F128G08CFAAB intel/micron sync nand is used in a variety of 240GB 6Gb/s SFs...

    ...so since, as referenced, increasing the speed at which you write to the nand accelerates the rate at which the gaps reduce, & as they use identical nand but, for example, the Force 3 GT writes slightly more quickly than the V3, the V3's nand will last longer.

    [NB there aren't enough non-SFs to pick out ones using the identical nand at different speeds, & you can't automatically assume that nand of a given rating is running at the same speed...

    ...it is, incidentally, a plausible reason why the <=256GB M4s had to have a lower overall write speed, since they have a much higher write amplification to compensate for in order to maintain a reasonable longevity...]​


    [Edit]

    Oh, & as it's popped into my head as another example, all of the 6Gb/s SFs will slow the write speeds down very temporarily on short term excessive writes & also have a long term overall protection to dynamically balance write speeds vs the amount of data written over the lifespan.

    This is part of the durawrite tech - greatly enhanced from the 3Gb/s ones where the short term slow down could be for much longer - & the sole reason is that writing too much, too fast, can lower the nand longevity to below the warranty period.
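    The lifetime-balancing idea can be sketched as a simple budget scheme - purely illustrative, since SandForce's actual DuraWrite policy isn't public, & the function name & numbers here are invented for the example:

    ```python
    # Illustrative lifetime throttle: permit full speed within a daily write
    # budget derived from the warranty target; beyond it, scale speed down
    # proportionally to the overshoot. NOT SandForce's real algorithm.
    def allowed_speed(full_speed_mbs, written_today_gb, daily_budget_gb):
        if written_today_gb <= daily_budget_gb:
            return full_speed_mbs
        return full_speed_mbs * daily_budget_gb / written_today_gb

    print(allowed_speed(500, 10, 20))   # under budget: full 500 MB/s
    print(allowed_speed(500, 40, 20))   # 2x over budget: throttled to 250.0
    ```

    The point being that a scheme of this shape keeps the total data written over the drive's lifespan bounded, whatever the user throws at it in the short term.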

    Unless you're overly b/ming, had a pretty niche usage (either of the previous with very little free space & OP) or did something foolish (like using a SF for downloading numerous huge torrents to - which is foolish for any SSD) then neither's that likely to be overly/regularly triggered in a normal home user's usage.

    [NB As i'd written at the end, i am not saying in this that you 'should' buy SFs - just commenting on the tech as an example within the topic... ...well, if there wasn't an issue, there'd be no point in having the tech.]​

    [End Edit]


    * * * * *

    it's not just about nand longevity though...

    it also improves performance with (esp) random data, through improving the chance that a write will have access to a pre-erased block, & the maintenance of speed through additional OP for managing program/erase (P/E) cycles.

    Whilst not covering all of it, see this intel white paper as an example... ...& this does cover the maintenance of speeds to some extent.

    (the 520 isn't an enterprise drive btw - just a 'normal' 6Gb/s SF)

    [NB the intel white paper does suggest that increasing the OP *has* to be done from a clean state... ...which is best practice anyway.

    There being a difference between the OP & free space as i described, ttbomk, a fortnight ago -

    "Secondly, unpartitioned space is not quite the same as free space.

    This is basically because when you write to a SSD you do so in pages, whereas to erase pages you have to erase all of the pages in the block - moving any needed data into a new block.

    Now, with free space, this quantity of blocks can get increasingly 'fragmented' (for lack of a better term) over time - with more & more blocks containing pages of invalid data that will need the valid data moving entirely before they can be written to again...

    [Edit 2 (v.2)]

    ...whereas unpartitioned space (as with the standard OP) increases the number of blocks that will normally(#) have free pages d.t. the way that the internal registers work - & so can gain from benefits in your 1st quote (& below).

    [# it is, of course, possible to hammer a SSD with almost any amount of OP into the ground with ridiculous amounts of writes... ...but the more OP you have, the less likely you are to suffer speed issues.]

    Whilst it's a subtle difference, ttbomk, it actually comes down to the formatted space being part of the logical registers which makes them unavailable immediately for all the facets of GC & whatnot within the physical registers...

    ...whereas standard OP & unformatted space are available within the physical registers for GC & whatnot.

    The total amount of nand equivalent to the OP area normally gets extra priority when it comes to pre-erasing blocks for future writes."
    ]​
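    The page-vs-block asymmetry in the quote above can be put into a toy model: writes land in pages, but erasure only happens a whole block at a time, so reclaiming a fragmented block means first copying its still-valid pages elsewhere - which is exactly where extra physical writes (& hence write amplification) come from. Block contents & sizes here are invented for illustration; real nand has far larger blocks:

    ```python
    # Toy model of why freeing a fragmented block costs extra physical writes,
    # whereas a pre-erased block (as kept in the OP area) costs nothing extra.
    def reclaim(block):
        """Return (valid pages to relocate, extra writes incurred by the erase)."""
        valid = [p for p in block if p == "valid"]
        return valid, len(valid)   # each valid page must be rewritten first

    # A fragmented block: mostly stale pages, but two still hold live data.
    moved, extra_writes = reclaim(["valid", "stale", "valid", "stale"])
    print(extra_writes)   # 2 extra physical writes just to free this block

    # A block held pre-erased in the OP area has nothing to relocate.
    moved, extra_writes = reclaim([])
    print(extra_writes)   # 0
    ```

    So the more blocks the controller can keep pre-erased (i.e. the more OP), the less often a write has to pay that relocation cost - which is the speed-maintenance benefit described above.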


    * * * * *

    Otherwise, setting aside not having read beforehand that a f/w update happened to be destructive, it is far more likely that you'd lose data either through a controller failure/panic or as a result of a f/w issue than d.t. normal issues of nand longevity.

    Not least as, according to the spec, nand 'should' become read only at EOL allowing you to copy all of the data off.


    However, it is still possible to have an entire nand die fail & lose everything on there (most SSDs effectively work internally in R0) - which is 'a' reason why SF's RAISE tech is advantageous in this regard, since it can survive that level of catastrophic failure (the SFs effectively work internally as R5).
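    The R5-style survival can be sketched with generic single-parity recovery - note this is the textbook XOR mechanism, not SandForce's actual (undocumented) RAISE implementation, & the die contents are invented two-byte stand-ins:

    ```python
    # Generic single-parity recovery, as in R5 (& conceptually in RAISE):
    # parity = XOR of the data stripes; any one lost stripe can be rebuilt.
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    dies = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]   # data on three nand dies
    parity = reduce(xor_bytes, dies)                  # stored on a fourth die

    # Die 1 fails entirely; rebuild its contents from the survivors + parity.
    rebuilt = reduce(xor_bytes, [dies[0], dies[2], parity])
    print(rebuilt == dies[1])   # True
    ```

    A plain R0-style layout has no such parity, so losing one die there loses the lot - which is the contrast being drawn above.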


    Having said that, even with SFs, a good backup regime is the only decent option imho...

    Well, it wouldn't be a great idea to not backup a R5 array of HDDs as with >1 drive failure or data loss d.t. other reasons (malware, user error, etc) then data's lost.

    [NB i am not saying in this that you 'should' buy SFs - just commenting on the tech.]​
     
    Last edited: 5 Jun 2012
  2. boz4442

    boz4442 What's a Dremel?

    Joined:
    8 Mar 2010
    Posts:
    12
    Likes Received:
    0
    That's a good (and long) read, but good info to know.

    Thanks
     
