
Hardware Making Sense of Next-Gen SSDs

Discussion in 'Article Discussion' started by Dogbert666, 1 Dec 2015.

  1. Dogbert666

    Dogbert666 Lover of bit-tech Administrator

    Joined:
    17 Jan 2010
    Posts:
    1,678
    Likes Received:
    181
  2. schmidtbag

    schmidtbag What's a Dremel?

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    SATA Express was such a stupid idea. Nothing about that seemed intelligent or necessary. It's basically like "Hey, remember those wide cumbersome IDE cables that nobody wants anymore? Let's make something similar to that except only allow 1 drive per cable, over-complicate the technology behind it, and make it more inconvenient and expensive!"

    My only complaint about M.2 is that it included a SATA connection. That was completely unnecessary. Y'know what normal SATA ports are [usually] connected to? The PCIe bus. M.2 should have been nothing more than the next-generation mini PCIe connector, and if you wanted to use an SSD it wouldn't have been that hard to just have it include a built-in SATA controller. In fact, considering how excessively long these cards can get, it would have been interesting if you could get an M.2 SATA controller and plug an mSATA SSD directly into it.

    U.2 doesn't impress me either. Though it isn't as unwieldy as SATA Express, it's still clunky. It may be simpler than M.2, but it appears to be almost completely unusable in mobile platforms (whereas M.2 can work in just about anything). And even then, if you're using a desktop PC you might as well go for full-size PCIe.


    So here's an idea: how about engineers work toward SATA IV and stop needlessly overcomplicating things? Unless U.2 and M.2 will have support for future versions of SATA, it really doesn't make sense to me why they'd involve a technology that's already heading toward obsolescence.


    Meanwhile, since the average desktop PC these days seems to come with at least 6 SATA ports, I'm going to take the easy way out and do RAID 0 with a bunch of cheap SSDs.
     
  3. Dave Lister

    Dave Lister Minimodder

    Joined:
    1 Sep 2009
    Posts:
    880
    Likes Received:
    12
    I saw Windows compatibility mentioned in the article, but what about Linux compatibility?
     
  4. phuzz

    phuzz This is a title

    Joined:
    28 May 2004
    Posts:
    1,712
    Likes Received:
    27
    Linux NVMe support you ask?
    Since kernel 3.1.
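    If you want to sanity-check it on your own box, here's a minimal sketch (assuming a Linux system with sysfs mounted at /sys; the paths only show up once the in-tree nvme driver is loaded and a device is present):

    # Quick check for NVMe support on a running Linux machine.
    import os
    import platform

    print("Kernel:", platform.release())

    # /sys/module/nvme appears when the nvme driver/module is loaded.
    if os.path.isdir("/sys/module/nvme"):
        print("nvme driver is loaded")
    else:
        print("nvme driver not loaded (it may still be available as a module)")

    # Each detected NVMe controller shows up under /sys/class/nvme.
    controllers = []
    if os.path.isdir("/sys/class/nvme"):
        controllers = os.listdir("/sys/class/nvme")
    print("NVMe controllers found:", controllers or "none")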
     
  5. Dave Lister

    Dave Lister Minimodder

    Joined:
    1 Sep 2009
    Posts:
    880
    Likes Received:
    12
    Cheers, good to know, as I don't like the way Windows has gone after Windows 7.
     
  6. SchizoFrog

    SchizoFrog What's a Dremel?

    Joined:
    5 May 2009
    Posts:
    1,574
    Likes Received:
    8
    I currently have 2x 256GB 840 EVOs (one in my laptop and the other in my main PC, both as OS drives). I have found it a hassle having such small drives, so I am thinking of buying a 512GB 850 EVO in Jan for the laptop and then using both of the 840s in RAID 0 as an OS drive for my PC... Is this a good idea? Will this increase speeds over a single drive or not (don't go by my current PC specs; the system will get an upgrade to Ivy or Skylake first)? Any other pros or cons?

    Thanks.
     
  7. schmidtbag

    schmidtbag What's a Dremel?

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    First, I'd like to point out that disabling your hibernation file, cleaning up Windows update files, turning off system restore, and shrinking (or entirely removing) your paging file can save you as much as 15GB. Not to mention, a lot of those things just eat away at your write cycles. Also if you use Google Chrome, I think by default it just keeps eating up disk space for cache and never cleans up after itself, but I could be wrong about that.
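    If you'd rather script those steps than click through dialogs, here's a rough sketch. The commands are standard Windows utilities (powercfg, DISM, the Disable-ComputerRestore PowerShell cmdlet), but treat it as illustrative and double-check each one before running it from an elevated session:

    # Rough sketch of the Windows cleanup steps described above.
    # Run from an elevated (administrator) session; verify each command first.
    import subprocess

    steps = [
        # Remove hiberfil.sys (its size is roughly equal to installed RAM)
        ["powercfg", "/hibernate", "off"],
        # Purge superseded Windows Update files from the component store
        ["Dism.exe", "/Online", "/Cleanup-Image", "/StartComponentCleanup"],
        # Turn off System Restore for the system drive
        ["powershell", "-Command", "Disable-ComputerRestore -Drive 'C:\\'"],
    ]

    for cmd in steps:
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=False)

    # The paging file is easiest to shrink or remove via System Properties >
    # Advanced > Performance settings rather than from a script.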

    Doing RAID does result in more CPU overhead and will probably slow down your boot process, but not by much. The real-world performance of it will vary depending on the task. If you do a lot of sequential reads/writes, you'll probably see a hefty performance boost. From what I've heard (I haven't confirmed this myself), there isn't a noticeable performance increase for random reads and writes of small files. It usually comes down to your RAID controller and/or CPU.
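    A toy sketch of why sequential transfers gain more than small random ones: RAID 0 deals logical blocks out across the members in fixed-size chunks, so a long run spans both drives while a small read usually lands on just one. (Illustrative only; this isn't how any particular controller implements it.)

    # Toy RAID 0 striping model: two members, fixed-size chunks.
    CHUNK_BLOCKS = 128      # e.g. a 64KiB chunk with 512-byte blocks
    MEMBERS = 2             # two SSDs in RAID 0

    def locate(lba):
        """Map a logical block address to (member index, block on that member)."""
        chunk, offset = divmod(lba, CHUNK_BLOCKS)
        member = chunk % MEMBERS
        member_block = (chunk // MEMBERS) * CHUNK_BLOCKS + offset
        return member, member_block

    # A 1MiB sequential read spans many chunks -> both drives work in parallel.
    sequential = {locate(lba)[0] for lba in range(0, 2048, CHUNK_BLOCKS)}
    print("drives touched by a long sequential read:", sequential)

    # A 4KiB random read fits inside one chunk -> only one drive services it.
    print("drive serving a 4KiB random read:", locate(12345)[0])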


    Personally, I'd advise you get something like a 32-64GB SSD to boot off of (and for important personal files) and then do the RAID setup for everything else. This will improve your boot performance significantly, and if you use 100% software RAID then the array becomes "portable".
     
  8. bawjaws

    bawjaws Multimodder

    Joined:
    5 Dec 2010
    Posts:
    4,362
    Likes Received:
    968
    Personally, there's no way I'd go for a 32GB SSD for a Windows installation (even assuming you could find a 32GB SSD anywhere). 64GB is much better, but still a little tight, and given current prices you're as well getting a 120GB/128GB SSD rather than a 64GB. That'll also give you sufficient headroom for when Windows gets a little bloated. It's no fun constantly having to clear space for Windows.
     
  9. SchizoFrog

    SchizoFrog What's a Dremel?

    Joined:
    5 May 2009
    Posts:
    1,574
    Likes Received:
    8
    Thanks for the reply.
    Saving 15GB isn't that important, not when a single game install can be as much as 50GB...

    To be honest I want to keep things as simple as possible. Ideally I would just buy 2x 512GB drives or larger, but 1) I don't have the funds to throw away, and 2) I would then have 2 redundant SSDs doing very little. So it seems that running RAID 0 will offer a viable solution with few drawbacks... that's good to know.
     
  10. schmidtbag

    schmidtbag What's a Dremel?

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    Clean up Windows' crap (like I mentioned in my previous post) and install all your programs to separate drives and you can get by with 32GB. Right now, I'm dual booting Windows and Linux off of a single 64GB SSD, and between them I have roughly 25GB unused. That's plenty for everything I use it for. I have a 500GB HDD RAID setup for my games, and I store my media elsewhere. All non-gaming programs I run from Linux, and Linux apps are very light on disk usage.

    What I suggested was strictly for a cost-effective way of getting great performance. And yes, I agree it isn't fun to clear out space for Windows but it also isn't fun throwing away 50 cents per GB that Windows puts to waste.

    BTW, there are plenty of 32GB SSDs (or smaller) even for SATA III - they're typically used for custom hybrid systems.


    @SchizoFrog
    If you'd like to save money, maybe it would be in your best interest to get a 1TB HDD to store games you don't play as often, or games that only have a half-second loading screen. Maybe not as convenient or fast, but it's a cheap way to keep a lot of stuff.
     
  11. Odin Eidolon

    Odin Eidolon What's a Dremel?

    Joined:
    7 Feb 2009
    Posts:
    231
    Likes Received:
    4
    There's one question not yet answered, and which has been bugging me for a while. Consumer NVMe M.2 SSDs, such as those used in Dell's new XPS13 and 15, are VERY power hungry. Even on light tasks, they seem to suck 2 watts or more, which is a huge amount on a system that uses 10 or less. This has led to a decrease (according to notebookcheck) in battery runtimes in the new version of the Dell XPS13, for example, despite a slightly larger battery and a much more efficient Skylake CPU. Where is the problem? Does it lie in the NVMe implementation, in BIOS bugs, in Samsung's drive (PM951 iirc)? Anandtech reported on the issue, but they did not try to understand it further. As far as I know, all laptops currently shipping NVMe SSDs (all of them Samsung ones) are affected by this power-hungriness, which for many users makes them unattractive.
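    To put that 2W in perspective, here's a quick back-of-envelope sketch; the battery capacity and platform draw below are assumptions for illustration, not measurements of any specific laptop:

    # Why ~2W of extra SSD idle draw hurts a thin-and-light so much.
    battery_wh = 56.0          # assumed battery capacity
    platform_w = 10.0          # assumed light-load draw with an efficient SSD
    ssd_extra_w = 2.0          # extra draw reported for early NVMe drives

    baseline_hours = battery_wh / platform_w
    with_nvme_hours = battery_wh / (platform_w + ssd_extra_w)

    print(f"baseline:  {baseline_hours:.1f} h")
    print(f"with NVMe: {with_nvme_hours:.1f} h")
    print(f"runtime lost: {100 * (1 - with_nvme_hours / baseline_hours):.0f}%")

    With those assumed numbers you lose roughly a sixth of your runtime to the SSD alone, which matches the kind of regression notebookcheck reported.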
     
  12. schmidtbag

    schmidtbag What's a Dremel?

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    @Odin
    I wonder if that's just coincidence. A few months ago I remember reading about how enterprise-level SSDs would lose data integrity if the SSD was left unused for a period of time (sometimes as short as 2 weeks). Supposedly consumer-level SSDs don't have this problem. IIRC, the problem had something to do with power. That being said, I wonder if the fix to this data loss problem was implemented with the NVMe drives. Just a theory.
     
  13. Alecto

    Alecto Minimodder

    Joined:
    20 Apr 2012
    Posts:
    134
    Likes Received:
    1
    Great overview of SSD PCI-E technologies, keep up the good work! :)
     
  14. Dave Lister

    Dave Lister Minimodder

    Joined:
    1 Sep 2009
    Posts:
    880
    Likes Received:
    12
    +1
    I do like articles like this; they make for some great reading.
     
  15. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    This was a result of reading a chart from a slide in a JEDEC presentation somewhat incorrectly. The '2 week' retention period could only be achieved by actively cooling an SSD (e.g. to below 30°C) during active use, then actively heating it (e.g. to 55°C) when turned off. Active temperatures below power-off temperatures don't happen in desktop machines (and are difficult to achieve even in servers), so it is not a concern.
    Interestingly, because SSDs rely on some temperature-dependent effects during the program/erase processes, they actually BENEFIT from running warmer when active, as long as it's below the breakdown temperature of the NAND (upwards of 100°C, but there are support electronics vulnerable to tunnelling on there, so below 80-95°C is reasonable). In terms of data retention, at least. The SSD controller may throttle depending on temperature, but running an SSD at 55°C-60°C is not something to worry about in terms of drive life.
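    The shape of those numbers falls out of a standard Arrhenius temperature-acceleration model. A sketch below reproduces the rough figures, assuming the ~1.1 eV activation energy commonly quoted for NAND charge retention and a ~52-week client baseline at 30°C power-off; both figures are assumptions here, not taken from the presentation itself:

    # Arrhenius acceleration factor for NAND data retention.
    from math import exp

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

    def acceleration(t_use_c, t_stress_c, ea_ev=1.1):
        # Acceleration of charge loss at a hotter storage temperature.
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

    baseline_weeks = 52.0                 # assumed retention at 30°C power-off
    af = acceleration(30.0, 55.0)
    print(f"acceleration at 55°C vs 30°C storage: {af:.0f}x")
    print(f"retention at 55°C storage: ~{baseline_weeks / af:.0f} weeks")

    With those assumptions the 55°C power-off case lands at roughly two weeks, which is where the scary headline number came from.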
     
  16. Hakuren

    Hakuren What's a Dremel?

    Joined:
    17 Aug 2010
    Posts:
    156
    Likes Received:
    0
    From my point of view there is a place for every standard in one PC... well, a workstation PC.

    I own 2x 750s (one for the OS, one as a workbench), 8x 250GB + 4x 1TB SATA SSDs (RAID 10 on an Adaptec 8-series for games and documents I access frequently), 4+4 SSDs/HDDs (a drivepool with 4x 120GB Intel 530s acting as caching drives and 4x HGST 4TB drives for movies and other very rarely accessed data - also connected to the aforementioned RAID AIC with an expander), external cold backup on 2x 8-bay Fantec USB3 enclosures combined into one giant drivepool, and one old SSD as a TEMP folder.

    1. NVMe is a beast for an OS partition or a workbench where you work with large directories, both in terms of size and numbers (thousands and thousands of files - look no further than photography, databases or video manipulation); somewhere you can dump a meaningful workload to see the benefits of this particular architecture. If you're buying NVMe solely for a gaming system then, no offense, you need a head examination, and fast.

    2. Classic AHCI SATA. Still good for everything around, from OS partition to server. You only need NVMe if you transfer multiple TBs of data daily or weekly. The problem with SSDs is ridiculous pricing: buying four 250GB drives is cheaper than one 1TB drive, which is quite dumb.

    3. M.2: not a fan of this particular standard in a PC. Good for SFF, NUC and similar applications, but in a big desktop it's more miss than hit. And all M.2 drives run insanely hot. Just what you need, immovable heat inside the case - most M.2 slots are in crazy places where no cooling can reach.

    4. HDDs. Well, these relics of the past are still with us, and until SSDs hit big capacities at a more reasonable price point (like $0.10/GB or less), vast adoption of 2TB+ SSDs is not going to happen. I have to say, it's painful to watch when copying between NVMe and the drivepool; despite caching, it's so slow if I dump a really big load of directories. Unfortunately HAMR technology is still in laboratories, and nobody really knows if HAMR drives will manage to replace current HDDs in any capacity before SSDs take over.
     
  17. leexgx

    leexgx CPC hang out zone (i Fix pcs i do )

    Joined:
    28 Jun 2006
    Posts:
    1,356
    Likes Received:
    8
    For normal use or for gamers, do not RAID 0; get the SSD size you need (probably 500GB or 1TB) and a 2-4TB disk (probably WD).

    I wouldn't buy anything less than 500GB, but I like to have a 1TB (only because of the rubbish on my system and an old Windows 7 install). At some point I'm going to buy a new system to keep for three years, so it's going to be something high end, as I've really got the full life out of this overclocked i7-920 with 24GB of RAM.
     
  18. Guest-16

    Guest-16 Guest

    Matt, you need to add on U.2:

    No power or thermal limitation, unlike M.2, which will hit power and thermal thresholds (as has been reviewed on the 950 Pro in places)
    U.2 will become more ubiquitous next year as the PCIE premium market expands. There are a few delayed chipsets shown at Computex that were due Q4, now in 2016 AFAIK.
    PC cases mostly support multiple U.2 drives, but only 1x M.2
    U.2 performance will vary by MB design AND by cable quality due to:
    > PCIE not being expressly designed for internal cabling
    > Firmware IO quality and validation will mean a lot
    > Shielded cabling with high quality wiring is required. You could get away with any-old SATA cable, but I'm not sure if that will end up being true with U.2 as cheaper alternatives hit the market.

    Ultimately a PCIE card is the best option:
    No possible issue with cabling or signal
    No issue with power or thermal: lots of space for a large heatsink and additional power cabling if required
     
  19. ZeDestructor

    ZeDestructor Minimodder

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    Having such a pessimistic view of the whole affair is really missing out on the context in which the various standards were developed.

    SATA-Express is largely intended for hybrid setups, where you want the option to have 2 SATA ports or a single SATA-Express port. IMO it's stupid (just make a clean break, and use M.2 SATA in laptops where you care), and the industry has thankfully largely ignored it.

    M.2 needs SATA compatibility because it's intended to succeed miniPCIe, where mSATA drives were quite popular, especially towards the end of its lifecycle. For cost reasons (cheaper SSDs), SATA over a miniature PCIe-style connector will remain popular, and thus it was specced into M.2. There are varying types of M.2 though, with M-key being SATA/PCIe x4, and B-key having PCIe x2, SATA, I²C, USB and a few other options, again due to flexibility requirements.

    U.2, on the other hand, is largely an offshoot of the server market, where the demand was for a hybrid SAS & PCIe connection, so drives could be kept in the front of servers in standard, hot-swappable form factors. Due to its really rather nice ability to not require drives in the PCIe slots, and its higher bandwidth than SATA-Express, it's become really quite popular even in the consumer space for those who need more capacity than can be offered in the M.2 form factor. For people who can afford it though (like very large SAN vendors), it's all about fully custom form factors to increase density well beyond what 2.5" can provide.

    Finally, as for SATA-IV, it's never gonna happen, purely because the power costs of extending SATA-III are about 20% higher than moving to PCIe, not to mention losing out on the flexibility of just having a ton of PCIe lanes. At the same time, HDDs are simply not scaling all that fast: in 10 years we've gone from 80MB/s transfer rates to 180MB/s, while SSDs went from 200MB/s in 2011 right up to >2000MB/s in 2014, and on SATA-III smacked straight into limitations of AHCI and SATA-III that HDDs are nowhere close to (command queuing and bandwidth being the most obvious ones). SATA will remain for HDDs, but SSDs will essentially have a mass exodus to PCIe. At some point soon though, GPUs will need something even faster than PCIe, so watch out for NVLink and competing standards.

    I have over 300GB of programs installed on Windows, then a second OS, and on top of that some games (LoL, PoE, TF2) I'd really rather keep on SSD so my PC doesn't grind to a complete halt when, say, patching, plus a pile of small files in my code repositories and general synced directories. On top of that, you generally don't want to remove your page file, so you don't lose the ability to see BSOD reports and dumps.

    As of right now, I have 2 800GB Intel S3500s in my desktop, with the rest of the storage on my NAS (pushing over 20TB right now...) connected via 10Gbit ethernet.

    If you want performance, you need to pay the cost in power. Idle power should be way better than 2W though (the SM951 claims 2mW PCIe L1.2 sleep), but I think the vendors are having some issues with getting the full PCIe L1.2 chain working reliably (or it got disabled somewhere in the firmware customization stage). Give it another gen or two (or a couple of firmware updates if you're real lucky) and things should be much better.

    Part of it I suspect is that the drive is simply not being shoved into L1.2 sleep fast enough (too large of an idle timeout combined with repeat, regular small access perhaps?), so it stays awake and at idle power for much longer than it would otherwise be, something better disk caching should fix.
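    A crude duty-cycle sketch of why the idle timeout matters so much; the active-idle figure below is an assumption for illustration, and the 2mW figure is taken from the L1.2 sleep claim above:

    # Average drive power is just a duty-cycle mix of awake-idle and L1.2 sleep.
    active_idle_w = 1.2      # assumed draw while awake but idle
    l12_sleep_w = 0.002      # ~2mW claimed for PCIe L1.2 sleep

    def average_w(fraction_asleep):
        return fraction_asleep * l12_sleep_w + (1 - fraction_asleep) * active_idle_w

    for asleep in (0.0, 0.5, 0.9, 0.99):
        print(f"{asleep:4.0%} of idle time in L1.2 -> {average_w(asleep):.2f} W average")

    In other words, unless the drive actually spends the vast majority of its idle time asleep, the headline sleep power is irrelevant and you see something close to the awake-idle draw.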

    PCIe was designed for cabled setups (including external PCIe, as used in stuff like Tesla C2050) way back in '05 when it first launched.

    You also seem to be quite mistaken about cabling: high-speed cabling is a known quantity these days, and no, you cannot use any old SATA cable and get SATA-III performance. By and large, for good SATA-III performance you need cables that run twisted twinax for the SATA data lines rather than the straight-through, untwisted, flat cabling commonly used in old SATA-I cables.

    On top of that, internal cabling is much easier to deal with than external cabling, which we already have for PCIe (Thunderbolt and others).

    As of right now, here's how things stand:

    SATA-III: 6GT/s for 600MB/s (4.8Gbps) usable
    PCIe 3.0 x1: 8GT/s for 985MB/s (7.88Gbps) usable
    SAS3: 12GT/s for 1200MB/s (9.6Gbps) usable
    PCIe 3.0 x4: 32GT/s for 3940MB/s (31.52Gbps) usable
    SAS3 x4 (4 lanes in a single SFF-8639 cable): 48GT/s for 4800MB/s (38.4Gbps) usable
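    Those usable figures fall straight out of the line rate and the line encoding (8b/10b for SATA and SAS, 128b/130b for PCIe 3.0); a quick sketch to verify them:

    # Usable throughput from raw line rate and line-encoding overhead.
    def usable_mb_s(gt_per_s, enc_payload, enc_total, lanes=1):
        gbps = gt_per_s * enc_payload / enc_total * lanes
        return gbps * 1000 / 8          # Gbps -> MB/s

    links = {
        "SATA-III":    usable_mb_s(6,  8, 10),
        "PCIe 3.0 x1": usable_mb_s(8,  128, 130),
        "SAS3":        usable_mb_s(12, 8, 10),
        "PCIe 3.0 x4": usable_mb_s(8,  128, 130, lanes=4),
        "SAS3 x4":     usable_mb_s(12, 8, 10, lanes=4),
    }
    for name, mb_s in links.items():
        print(f"{name:12s} ~{mb_s:.0f} MB/s")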

    In conclusion, don't worry about cabling, it'll be fine for the most part, and U.2 will probably be the most successful form of PCIe storage for servers, desktops and other high-performance stuff, and M.2 for laptops.

    PCIe add-in cards will largely be reserved for super-expensive, super-high-performance stuff that needs more than PCIe x4, though there will probably be a decent selection of U.2 drives available in PCIe x4 add-in card form. Don't expect those to sell much.

    As a side note, you can get (expensive, like >$50/m for up to 15m TwinAx) copper-based external cables that can hit 100Gbps in each direction for 100Gbps Ethernet and InfiniBand usage right now, though at some point soon the entire ecosystem will shift to fibre, even for <10m setups.

    As a side note, you may want to start thinking about how to fit fibre into your house for Ethernet sometime in the next 10 years: I reckon 10GBASE-T (10Gbps RJ45/8P8C) has a decent chance of being the last of the cheap copper cabling (CAT-6/6A/7) to make it to market. By and large, nobody likes 10GBASE-T because of its higher power and higher latency compared to fibre or TwinAx direct-attach cabling. I've personally started my move to SFP+ cabling, using direct-attach copper TwinAx for short (<7m) runs, with plans for fibre on longer runs for the high-performance stuff.

    On the subject of firmwares (and controllers, for that matter), there's already serious differentiation between SSDs, with many of the cheaper drives having... shall we say... less than excellent firmwares (especially if you use Linux). The result is that for a lot of people, only Intel, Samsung and Micron/Crucial are worth buying.
     
  20. Guest-16

    Guest-16 Guest

    Yes, PCIE can do external, but it needs to be shielded and limited in length unless you use repeaters, and it doesn't have advanced error correction like USB 3.1 does. PCIE-SIG's main aim is cost, cost, cost, not quality.

    I'm not talking server grade, which is validated; I'm talking the inevitable cheap stuff that'll make it into stores/bundles. My point was that you can use the 5p SATA cables you got with your motherboard rather than digging out some from 10 years ago - you don't need holy-water-dipped ones. :eeek:

    Unfortunately PCIE cards will likely end up being used for high-end only, which is another mistake by PCIE-SIG and SSD designers imo, who only design for industry. PCIE is elegant for home users when we have several PCIE slots; U.2 is not.

    For servers the ecosystem above 10G will shift to fibre [and Intel has gone quiet on silicon photonics, so that's still a 'pipe' dream], but copper will fill everything below it due to cost/infrastructure: 2.5/5/10G. Unfortunately NBase-T on 5e/6 doesn't have home/SOHO in its sights. Mistake, mistake, mistake by an ignorant working group.

    [We should also be moving to single 12-24V high-amp input on PCs but that's another rant for another day].
     
    Last edited by a moderator: 7 Dec 2015
