Storage: SSD lose speed?

Discussion in 'Hardware' started by Siwini, 4 Nov 2010.

  1. Siwini

    Siwini What is 4+no.5?

    Joined:
    14 Sep 2010
    Posts:
    617
    Likes Received:
    33
    Is it true that an SSD loses speed over time? If so, can anyone explain why? The reason I ask is that this dude bought an SSD, installed it and scored 7.2 in the Windows Experience Index; a week or two later it had dropped to 6.8, something like that. Why spend a ridiculous amount of money on something that's only good for a week or so? What is up with that nonsense :confused:
     
  2. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    12,007
    Likes Received:
    721
  3. Marine-RX179

    Marine-RX179 What's a Dremel?

    Joined:
    24 May 2010
    Posts:
    406
    Likes Received:
    14
    Also, if you haven't learnt by now... you should ignore WEI. It might serve OK as a rough reference, but it is hardly an accurate means of indicating performance.
     
  4. Ph4ZeD

    Ph4ZeD What's a Dremel?

    Joined:
    22 Jul 2009
    Posts:
    3,806
    Likes Received:
    143
    You should actually run respectable benchmarks to see if the speed has changed. I personally don't base my purchasing decisions on what a "dude" tells me.
     
  5. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Without knowing which SSD it is, it's not exactly easy to pinpoint the most likely issue...

    ...well, along with wuyanxu's comment about TRIM (which 'may' be more or less valid depending upon the SSD), without knowing the model (different controllers perform differently), the actual setup (there could be a separate issue) & the write profile (naturally, 'if' you ran a benchmark immediately after a very heavy read-erase-write (& then erase) cycle, then any benchmark is going to be slower), it's very difficult to guess a cause...


    However, most SSDs now have a version of garbage collection (which can normally be forced to run by setting the power options never to turn off the HDD & logging off overnight 'if' you notice a significant slowdown), which lets the SSD recover performance independently of TRIM availability.


    Personally, using either my older Indilinx SSDs or my SandForce (SF) drives in RAID arrays (so there was/is no TRIM), I've never found the need to do anything special to them & performance stays consistent in far more informative benchmarks than WEI...

    ...partly that's my setups including decent RAID cards (so there's no separate issue), partly it's choosing the SSDs wisely & partly, with the SFs (as an example of how the SSD has to suit the actual usage), my read-erase-write profile isn't designed to hammer the drives with media encoding/editing/etc, which would cause the speeds to deteriorate more quickly -> I have 15K7 enterprise HDDs for that.
     
  6. MaverickWill

    MaverickWill Dirty CPC Mackem

    Joined:
    26 Apr 2009
    Posts:
    2,658
    Likes Received:
    186
    Isn't it also the case that the Windows Experience Index updates regularly, to make room for newer hardware at the top of the scale? It could just be that the WEI rescaled rather than the drive actually slowing down.
     
  7. perplekks45

    perplekks45 LIKE AN ANIMAL!

    Joined:
    9 May 2004
    Posts:
    7,558
    Likes Received:
    1,813
    What exactly is it good for anyway? It was supposed to give users a rating system for new software, i.e. "you need a system with a score of 4.0 or higher to run X", but if they keep changing the scale, as they have to in order to keep it halfway in line with hardware development, software packaging has to be updated as well... sure thing, eh?
     
  8. MaverickWill

    MaverickWill Dirty CPC Mackem

    Joined:
    26 Apr 2009
    Posts:
    2,658
    Likes Received:
    186
    Nah, IIRC, it's just the top of the scale that changes. The 1-5 (or whatever it is) that programs base themselves on stays more or less constant, but the top gets squished.

    At least, that's how it worked with Vista. Whether 7 kept the same logic or not, I have no idea. This is just conjecture.
     
  9. perplekks45

    perplekks45 LIKE AN ANIMAL!

    Joined:
    9 May 2004
    Posts:
    7,558
    Likes Received:
    1,813
    Well... that would make more sense but what about games? Cutting edge hardware today is, quite literally, yesterday's news in a couple of weeks...

    Anyway, this is not what the thread is about. ;) I, too, would say use HD Tach or ATTO's benchmark to test the SSD and check against the official numbers and other tests on the 'net.
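
    For a quick-and-dirty sanity check, a rough timing script works too. Here's a minimal Python sketch (the file name and sizes are arbitrary choices, and it's nowhere near as thorough as ATTO or HD Tach - the read figure in particular can be inflated by the OS cache):

    Code:
        # rough_ssd_check.py - crude sequential write/read timing on the current drive.
        # Not a substitute for ATTO or HD Tach; file name and sizes are arbitrary.
        import os
        import time

        TEST_FILE = "ssd_speed_test.bin"
        BLOCK = 4 * 1024 * 1024        # write in 4 MiB chunks
        BLOCKS = 256                   # ~1 GiB in total

        def write_speed():
            data = os.urandom(BLOCK)
            start = time.time()
            with open(TEST_FILE, "wb", buffering=0) as f:
                for _ in range(BLOCKS):
                    f.write(data)
                os.fsync(f.fileno())   # force the data onto the drive, not just the OS cache
            return (BLOCK * BLOCKS / 1024.0 ** 2) / (time.time() - start)

        def read_speed():
            start = time.time()
            with open(TEST_FILE, "rb", buffering=0) as f:
                while f.read(BLOCK):
                    pass
            return (BLOCK * BLOCKS / 1024.0 ** 2) / (time.time() - start)

        if __name__ == "__main__":
            print("sequential write: %.0f MB/s" % write_speed())
            print("sequential read:  %.0f MB/s" % read_speed())  # likely cached, so optimistic
            os.remove(TEST_FILE)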
     
  10. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
  11. DarrenH

    DarrenH What's a Dremel?

    Joined:
    12 May 2010
    Posts:
    304
    Likes Received:
    3
    I would ignore the WEI score. The lowest score on my new PC is the new Samsung F3 1TB HDD at 5.9, but it's by far the fastest drive I've ever used. It's over three times faster than the old IDE drive I had XP on and feels mighty quick to me.

    I can only imagine how fast two of these in RAID 0 would be. Either way, the speed is more than enough for me no matter what Windows says. Don't know about SSDs though, as I've not used one yet.
     
  12. sb1991

    sb1991 What's a Dremel?

    Joined:
    31 May 2010
    Posts:
    425
    Likes Received:
    31
    The hard drive (even a spinpoint f3) is still the major bottleneck in most programs people run. Obviously it's different if you're gaming/rendering, but for the standard web browser/office/music player stuff, the hard drive is the piece of hardware holding things back... not that it really matters, but it's called the 'experience index', not 'reliable high-performance compute benchmark'.
     
  13. Siwini

    Siwini What is 4+no.5?

    Joined:
    14 Sep 2010
    Posts:
    617
    Likes Received:
    33
    OK, but that's not what I asked. What I want to know is whether it's true. I want to buy an SSD, but which one should I get? My budget is $250.
     
  14. Dae314

    Dae314 What's a Dremel?

    Joined:
    3 Sep 2010
    Posts:
    988
    Likes Received:
    61
    Read the very first reply post (actually the link's broken so here).

    Here's the wiki article on wear leveling

    Here are some other quotes from the SSD article on the wiki:
    • Wear leveling used by most SSDs intrinsically induces fragmentation. Moreover, defragmenting a SSD by a defragmenter is harmful since it adds wear to the SSD for no benefit.
    • SSD write performance is significantly impacted by the availability of free, programmable blocks. Previously written data blocks that are no longer in use can be reclaimed by TRIM; however, even with TRIM, fewer free, programmable blocks translates into reduced performance.
    • As a result of wear leveling and write combining, the performance of SSDs degrades with use. However, most modern SSDs now support the TRIM command and thus return the SSD back to its factory performance when using OSes that support it like Windows 7 and Linux.

    Basically, as long as your OS supports the TRIM command (basically every current OS except Apple's), your SSD shouldn't suffer that kind of write-performance drop-off.
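
    If you want to double-check that Windows 7 is actually sending TRIM commands, here's a small sketch - Windows-only, and it just wraps the standard fsutil query (run it from an admin prompt if it complains):

    Code:
        # check_trim.py - asks Windows whether delete notifications (TRIM) are enabled.
        # Windows-only; simply wraps "fsutil behavior query DisableDeleteNotify".
        import subprocess

        out = subprocess.check_output(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"]
        ).decode(errors="ignore")

        # fsutil reports "DisableDeleteNotify = 0" when TRIM commands are being sent.
        print(out.strip())
        print("TRIM enabled" if "= 0" in out else "TRIM disabled (or the query failed)")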
     
  15. jrs77

    jrs77 Modder

    Joined:
    17 Feb 2006
    Posts:
    3,483
    Likes Received:
    103
    The thing is, if you're using a rather small SSD (less than 100 GB) and have your temporary files and caches set to use that SSD, then it won't live very long, as the possible write/erase cycles for every single block will be reached rather quickly.

    This is only a problem when you heavily use your system on a daily basis, of course, and it also applies to a normal hard disk, but it's still something to think about, as SSDs cost way more than standard HDDs.
     
  16. Siwini

    Siwini What is 4+no.5?

    Joined:
    14 Sep 2010
    Posts:
    617
    Likes Received:
    33
    I've researched a solution. Disabling indexing, SuperFetch, the pagefile etc. will greatly decrease the chance of the SSD going kaput. There's no cache on the SSD, so there are no benefits to write caching, although there are conflicting reports on whether this gains any speed or not. Even things like hibernation can mess your SSD up if not disabled.
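
    For what it's worth, the hibernation one at least is a one-liner. A minimal sketch, assuming Windows and an elevated (admin) prompt - the indexing/SuperFetch/pagefile changes are ordinary control-panel settings, so they're not shown here:

    Code:
        # disable_hibernation.py - turns hibernation off so hiberfil.sys
        # stops being written to the SSD. Needs an elevated (admin) prompt.
        import subprocess

        subprocess.check_call(["powercfg", "-h", "off"])
        print("Hibernation disabled; hiberfil.sys will be removed.")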
     
  17. perplekks45

    perplekks45 LIKE AN ANIMAL!

    Joined:
    9 May 2004
    Posts:
    7,558
    Likes Received:
    1,813
    I'm sorry, but did you actually read any of the SSD reviews on BT?
    There is cache on SSDs; the reviews explain TRIM and its effect on performance over a longer period of time, and there's even an article dedicated to just that topic:

    Why You Need TRIM For Your SSD

    Latest tested SSD:
    Crucial RealSSD C300 Review 128GB

    This should help you a lot.
     
  18. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    It depends upon the SSD/controller: the gen1 NAND-based SSDs don't have cache (it was added to the gen2s to remedy the stuttering issues), & the SF controller SSDs don't have cache in the same sense as the other gen2 SSDs (there's a small amount on the controller itself, but it doesn't affect things in the same way as the DDR cache on the others - in theory, this 'should' make the SFs cheaper in the long run, but it hasn't quite worked out that way yet).


    As to that bit-tech article about TRIM...

    ...well, I've said before that 'if' you are proposing to copy 100GB onto a 128GB SSD, delete it, & then do the same another 9 times in short order, then of course there will be a slowdown.

    However, 'if' you actually use an SSD more normally as an OS/apps/games drive (rather than like a $&%$^*& idiot), then garbage collection on most modern SSDs will keep things nice & quick - okay, the C300 is less good at this, but most other gen2 SSDs (with the latest f/w) will happily work in non-TRIM environments without such disastrous consequences as those claimed in the article.

    Similarly (again as said before), I've happily been using SSDs in R0 arrays (so no TRIM) for over a year & there has not been this slowdown.

    So, as with many things, if you act foolishly then you may well get shonky results, but otherwise it's a pretty poorly conceived article with little, if any, bearing on the huge majority of real-life usage.
     
  19. Baz

    Baz I work for Corsair

    Joined:
    13 Jan 2005
    Posts:
    1,810
    Likes Received:
    92
    While our testing is extreme, it undoubtedly showed that SSD performance does drop off following extensive use unless you use TRIM. I've replicated these results with every single SSD out there - SandForce, Indilinx, Crucial; if you don't run TRIM, performance WILL degrade following extended use. Of course, this effect will be lessened in real-world circumstances, but it will still be there. While you mightn't have noticed your RAID0 arrays dropping off in performance, I'm confident they'd be running notably slower than when the array was first built. Whether you've noticed it or not, it's just a fact of how SSDs work.

    There seems to be a whole lot of confusion going on in this thread too, so allow me to clear some stuff up.

    Firstly, running a small SSD and not disabling the page file/caching/hibernation/umpteen Windows settings will NOT kill the SSD in double-quick time. While it might lead to unnecessary writes and will, in the long run, reduce the drive's life, drives pack wear-levelling algorithms and can handle hundreds, if not thousands, of write cycles. We've run Indilinx SSDs in our graphics test rigs for 18 months now, fully re-imaging the drives on roughly a weekly basis, and they still work perfectly. While there is a lot of tweaking information out there, with the addition of TRIM to restore performance, users don't NEED to tinker with page file/caching/Windows settings (though they can still benefit from it). In all my time reviewing and testing SSDs, I have never killed a drive through over-use. What's the point of adding a new bit of hardware to your system if you then have to go and turn off all the system's features? These are consumer products and don't require you to disable all your system's caching/pagefile setup to work well.

    Secondly, yes, all SSDs will suffer some degree of performance degradation if they're run without TRIM and used heavily. Of course, running a drive in a desktop environment with 30GB free, or even in RAID0 with empty space, you're unlikely to see too many issues, but fill a drive up and start deleting/re-writing and you're going to see the performance hit from read/modify/write cycles as the drive clears out the junk, undeleted data from the cells. While some drives are less susceptible than others, all drive controllers are affected to some degree, which is why you need TRIM on SSDs.
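
    To picture why the free-block pool matters, here's a deliberately crude toy model (the block counts and costs are invented purely for illustration - real controllers are far cleverer than this): with TRIM the drive always has clean blocks to program, while without it every write eventually pays for an erase first.

    Code:
        # trim_toy_model.py - toy model of why writes slow down once every block is dirty.
        # Block counts and cost figures are made up purely for illustration.
        TOTAL_BLOCKS = 1000    # pretend erase blocks on the drive
        WRITE_COST = 1         # cost of programming a clean block
        ERASE_COST = 10        # extra cost when a dirty block must be erased first

        def write_workload(n_writes, trim):
            free = set(range(TOTAL_BLOCKS))   # blocks the controller knows are clean
            dirty = set()                     # blocks holding stale (deleted) data
            cost = 0
            for _ in range(n_writes):
                if free:
                    blk = free.pop()
                    cost += WRITE_COST
                else:
                    blk = dirty.pop()         # read/modify/write: erase before reprogramming
                    cost += ERASE_COST + WRITE_COST
                # the host later deletes the file that used this block...
                if trim:
                    free.add(blk)             # ...and TRIM tells the drive it is reusable
                else:
                    dirty.add(blk)            # ...but without TRIM the drive still holds the stale data
            return cost

        print("relative cost with TRIM:   ", write_workload(5000, trim=True))
        print("relative cost without TRIM:", write_workload(5000, trim=False))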
     
  20. PocketDemon

    PocketDemon Modder

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139

    I'm really not convinced that the TRIM arguments are actually valid, with two exceptions - the first being very heavy usage (which I omitted as it's not the general user's 'normal' usage type) & the second I'll mention later.

    Now, the reason I disagree is that, under your testing methodology in the article, you're attempting to ensure that every NAND block is dirty, including any over-provisioning - obviously the data might contain a large number of 1s (NAND cells having an empty state of 1 not 0, of course) that a single write-erase cycle wouldn't accommodate - thus, by utilising the wear levelling of the controller, this 'should' ensure that all of the NAND blocks end up dirty... ...though perhaps 2 runs of something like AS-Cleaner without the FF box checked would have made every cell dirty & given a truer (& 100% comparable) 'worst case scenario'(?).

    In this situation, of course, there will be a noticeable drop-off in, predominantly, write speeds (though the V2s showed more of a drop in read speeds whilst largely maintaining writes when Anandtech performed a similar test // the C300 fared incredibly badly in writes), either without TRIM & before GC has had a chance to kick in, or before TRIM has had a chance to run (the SFs, for example, are less responsive to TRIM commands), as the controller has to perform an erase cycle on every NAND block that the benchmarking software tries to write to.


    What is not the case, though, is the assumption that is then drawn from this in the article -

    "While writing 1TB of data to a 120GB drive might seem a bit extreme, and for the majority of users will be far less than you'll write to your SSD in possibly its lifetime, our objective here is to demonstrate the wear on a drive over an extended time period."

    - since this only holds water if, without TRIM, there were no other cleaning processes occurring (obviously ignoring manually run ones, as these were mentioned in the article), so that every read-erase-write cycle cumulatively left more & more dirty NAND blocks until the SSD got to the point where every block was either used to store data or dirty, & the results of the testing in the article were accurate.

    This, however, clearly isn't the case with the later SSDs, as GC will operate (some to greater effect than others - as said, the C300 isn't one of the better ones for this) using idle time, OR can be made to run more proactively via an overnight log-off (with HDDs set to never turn off) if necessary in a heavier-write environment.


    Hence, your assertion that -

    "While you mightn't have noticed your RAID0 arrays dropping off in performance, I'm confident they'd be running notably slower than when the array was first built. Whether you've noticed it or not, it's just a fact of how SSDs work." (my underlining)

    - doesn't actually hold true (unless, of course, you happened to know that I was attempting to write foolish amounts of data to the array, then delete it, then write... ...& then... ...on a very regular basis - which I don't).

    My testing back with a pair of 120GB V Turbos in R0 showed that AS-Cleaner (the manual program recommended for those drives, which sets all empty blocks to 1s with the FF box checked - hence the empty blocks are 'as new') gave no advantage in any benchmark over simply using the array normally, with GC kicking in when it wanted to.

    [quick note - *DO NOT* use AS-Cleaner on SF controller drives as it screws with the DuraWrite tech - however it should work with any other currently available drive as a quick manual solution]


    Going back to the beginning, there is, I accept, a second potential limitation - (as you wrote)

    "but fill a drive up and start deleting/re-writing and you're going to see the performance hit from read/modify/write cycles as the drive clears out the junk, undeleted data from the cells."

    - however, I'd argue that that's more a case of someone having bought too small a drive for their usage &, entirely separately, it's the rationale for extra over-provisioning, which helps to maintain free blocks that can be written to.


    Now, within all of this, I'm neither saying that TRIM is shonky nor that it wouldn't be advantageous if it were implemented for every OS & for RAID arrays; however, it isn't the case that it's necessary, &/or that you could actually see anything like the noticeable slowdown that was shown in the article (or, for most users, measure any slowdown at all) within the normal usage of the vast majority of users (previous notes applying).



    As to the rest of your comments, you're perfectly correct that tweaking is an optional thing that, at times, can have marginal gains - though I guess it's something that a significant number of people on here would look to do, as they're either fairly tech-savvy &/or want to get the best performance for their money.
     
