It seems PCIe Gen 3 is starting to bottleneck M.2 PCIe NVMe SSDs, and Gen 4 is slated for release in early 2017. Unfortunately, the first generation of AMD’s “Summit Ridge” processor family likely won’t support PCI Express 4.0, nor will the first rollout of Intel’s seventh-generation “Kaby Lake” desktop processors. http://www.digitaltrends.com/computing/pci-express-4-device-spotted-intel-developer-forum/ Does that mean we need to wait for Coffee Lake to see Gen 4 and a new batch of faster M.2 PCIe NVMe SSDs, or could it be even longer?
The new HEDT platform Intel is launching this year won't support it. The new Xeons coming this year won't support it. The HEDT Ryzen chips won't support it. First-gen server Ryzen won't support it. Which essentially means we'll have to wait at least until 2018 for CPU/motherboard support, maybe even longer. The only real sign of PCIe 4 at the moment is coming from server networking, for example the Mellanox ConnectX-6 EN IC, which supports PCIe 4: http://www.mellanox.com/page/products_dyn?product_family=268&mtag=connectx_6_en_ic But of course that doesn't do much without a motherboard to plug it into that can fully exploit it. As for faster M.2 SSDs, only Samsung is maxing out the PCIe 3 x4 interface currently used for M.2, and even then only in sequential reads. I suspect Samsung doesn't really care about that limit, because the biggest threat to their SSDs will come from the Intel 900P Optane SSD, which will offer significantly better worst-case performance, so Samsung will have to focus on that rather than trying to push sequential reads even higher.
It won't be until Coffee Lake/300-series chipsets at the earliest, and it might not be until Cannon Lake given that Coffee Lake is just another refinement of Sky/Kaby Lake. Likewise, you'll probably need to wait for Zen v2 before you see it from the red corner.
I'd suspect at this stage that Zen 2 would still be PCIe 3.0, and then with Zen 3 (or Zen+) they'd move to PCIe 4.0 and DDR5 in one go, changing the chipset but keeping the AM4 socket. That's assuming AMD won't make customers change sockets every other year like Intel.
PCIe 3 isn't the bottleneck; it's the restriction on lanes on the lower-end mainstream platforms. If you have a HEDT system with a 40-lane chip, you have quite a bit of bandwidth available, and AMD's server variant of Ryzen has 128 lanes. This stuff just needs to filter down, but there hasn't really been a need in the home market until things like NVMe blew up.
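To put rough numbers on that, here's a quick sketch of usable per-direction PCIe bandwidth by generation and lane count (after subtracting line-encoding overhead; real-world throughput is a bit lower still):

```python
# Approximate usable PCIe bandwidth per lane, per direction.
# PCIe 1.x/2.x use 8b/10b encoding; PCIe 3.0/4.0 use 128b/130b.
def lane_gbps(gen):
    rates = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10),
             3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    gt_per_s, encoding = rates[gen]
    return gt_per_s * encoding / 8  # GB/s per lane

for gen in (2, 3, 4):
    for lanes in (4, 16):
        print(f"PCIe {gen}.0 x{lanes}: {lane_gbps(gen) * lanes:.2f} GB/s")
```

So a 40-lane Gen 3 HEDT chip has roughly 39 GB/s of CPU-attached bandwidth to spread around, while a mainstream x16 CPU tops out around 16 GB/s before you've attached a single drive.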
I'd tend to agree with sandys that it's the restriction on the number of lanes that's the problem. I mean, what's up with Intel only offering 16 lanes on their mainstream CPUs?
I know. I was eyeing up that Kaby Lake-X i7 on X299 thinking it would be cool: more lanes, more RAM bandwidth and an upgrade path. But it sounds like it's being hobbled out of the box with 16 lanes and dual-channel RAM, which makes it pointless. Maybe AMD can shake things up with their HEDT platform, though it seems unlikely; they seem to like being a cheap knock-off of Intel.
http://pcisig.com/specifications/review-zone ^^ as the gentleman said above - the spec isn't even set yet
I've seen some articles about GPUs supporting it; I didn't think it would be such a long wait on the motherboard side. "AMD's Vega 20: 32GB HBM2, 7nm, PCIe 4.0, 2018 release" and "The SLI bridge may be ancient history in 2017 due to PCIe 4.0's huge bandwidth".
AMD did the smart thing with Zen and pulled a PCIe 3.0 x4 link (~4GB/s) right off the CPU for M.2, whilst Intel pulled a complete derp: on s115x theirs hangs off the PCH, which has to share the DMI link (same bandwidth as PCIe 3.0 x4) with USB/SATA/LAN/etc. Personally I think 4GB/s will be enough for the foreseeable future, although when PCIe 4.0 makes it into silicon in 2-3 years, the M.2 lanes will get the upgrade just as a matter of course.
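To show what that DMI sharing costs in practice, here's a back-of-envelope sketch; the concurrent device loads are illustrative assumptions, not measured figures:

```python
# DMI 3.0 is electrically PCIe 3.0 x4: roughly 3.94 GB/s each way.
dmi_gbs = 8.0 * (128 / 130) / 8 * 4

# Hypothetical concurrent traffic through the PCH (illustrative only):
other_traffic = {"SATA SSD": 0.55, "USB 3.0": 0.4, "GbE LAN": 0.125}

left_for_m2 = dmi_gbs - sum(other_traffic.values())
print(f"DMI budget: {dmi_gbs:.2f} GB/s; left for M.2: {left_for_m2:.2f} GB/s")
```

With those assumed loads, a PCH-attached M.2 drive has well under 3 GB/s left, whereas a CPU-attached x4 link keeps its full ~3.94 GB/s regardless of what else is plugged in.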
You don't really need more bandwidth; you need better small-file performance, lower latency and more IOPS. That can only be achieved with something other than NAND: you need to approach DRAM performance, which is where XPoint/phase change/etc. comes in.
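Quick arithmetic on why latency, not the interface, is the limiter for small I/O; the latency figures here are illustrative assumptions, not benchmark numbers:

```python
# At queue depth 1, throughput is bounded by 1 / latency.
def qd1_mib_per_s(block_kib, latency_us):
    iops = 1e6 / latency_us       # one request completes per latency period
    return iops * block_kib / 1024  # MiB/s

# ~100 us is the ballpark often quoted for NAND 4K random reads;
# ~10 us is the kind of figure claimed for XPoint-class media (assumed).
print(f"NAND-ish:   {qd1_mib_per_s(4, 100):.0f} MiB/s")
print(f"XPoint-ish: {qd1_mib_per_s(4, 10):.0f} MiB/s")
```

Both results sit far below the ~3,940 MB/s a PCIe 3.0 x4 link can carry, which is the point: cutting media latency by 10x does far more for random I/O than doubling link bandwidth ever could.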
On the other hand, the Zen chipsets only break out PCIe 2.0, so if you need to attach high-bandwidth PCIe devices (e.g. storage controllers) you're SOL. There's also no PCIe RAID. It can also lead to some weirdness if you have peripherals that use DMA to access an SSD: those now need to jump the CPU-PCH link and talk to the CPU, rather than bypassing it and going straight through via the PCH. Mostly a problem for GPU compute workloads, but it could be an issue in games that aggressively stream assets direct from the drive rather than from system RAM.
Beyond PCIe you want to move to storage-on-DIMM, like upcoming Xeons will have. DDR5 spec includes it.
Intel are using HMC on the Xeon Phi package for Knights Landing (Intel co-developed HMC with Micron). As with HBM, HMC needs to be soldered onto the same board as the host accessing that memory (HBM needs an expensive silicon interposer; HMC can use a PCB). Unless people are willing to go for fixed, non-upgradeable memory, we won't see HMC or HBM as main system RAM outside of special cases like ultra-compacts that already use soldered DDR.