Discussion in 'Article Discussion' started by bit-tech, 22 May 2018.
From what I remember this was pretty clear for the MSI X370 Carbon Pro.
Its top M.2 was PCIe 3.0 x4 or SATA 6Gb/s and the bottom M.2 was PCIe 2.0 x4 or SATA 6Gb/s, disabling either PCIe port 6 or SATA port 6 depending on the drive installed, and the heatsink was on the top M.2.
This is hardly an AMD-specific issue. If anything, I'd say Gigabyte has a tendency of questionable decision making on their boards' M.2 slot configurations.
For example, I have a Gigabyte Z370 Aorus Gaming 7, which has 3 M.2 slots, and the arrangement is as follows:
Slot 1 (M2M_32G) - M.2 110mm, includes heatsink. Shares bandwidth with two SATA ports, so SATA3 ports 4 and 5 become disabled if any M.2 SSD (NVMe or SATA) is installed.
Slot 2 (M2A_32G) - M.2 110mm, no heatsink. Does not share bandwidth, so SATA3 port 0 becomes disabled only if a SATA SSD is installed - no issues with NVMe drives. However, this is near the top PCIe x16 slot, so it would sit underneath the GPU if a dual-slot GPU is used.
Slot 3 (M2P_32G) - M.2 80mm, no heatsink. Shares bandwidth with the bottom PCIe x16 slot, which is x4 electrically. Does not support SATA M.2 SSDs, only NVMe, and can only be used if that bottom PCIe x16 slot is unpopulated.
So, to summarize - Slot 1, the most ideally physically located slot (and presumably the one Gigabyte expects everyone to use for their primary drive, as indicated by the heatsink being installed on it by default) will immediately disable two SATA ports with any sort of SSD installed. Slot 2 is actually the most ideal in terms of PCIe lane splitting, but its physical location under the GPU is very much less than ideal - NVMe SSDs can get quite hot, and being placed under a hot GPU does them no favours. And then Slot 3 is both physically less flexible (80mm instead of 110mm) and only supports NVMe drives, at the expense of a PCIe x16 (x4) slot to boot. Taken together, it's a mess.
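To make the mess concrete, here's a quick Python sketch of the three slots' sharing behaviour. The slot names are from the manual; the rule encoding is my own simplified model of what Gigabyte documents, not an official spec:

```python
# Simplified model of the Z370 Aorus Gaming 7 M.2 sharing rules
# described above (my interpretation, not Gigabyte's documentation).
M2_SHARING = {
    "M2M_32G": {"disables": ["SATA3_4", "SATA3_5"], "trigger": "any"},
    "M2A_32G": {"disables": ["SATA3_0"], "trigger": "sata_only"},
    "M2P_32G": {"disables": ["PCIEX4"], "trigger": "any", "nvme_only": True},
}

def disabled_ports(slot: str, drive_type: str) -> list[str]:
    """Return the ports lost when an 'nvme' or 'sata' M.2 drive
    is installed in the given slot."""
    rule = M2_SHARING[slot]
    if rule.get("nvme_only") and drive_type == "sata":
        raise ValueError(f"{slot} does not accept SATA M.2 drives")
    if rule["trigger"] == "any" or drive_type == "sata":
        return rule["disables"]
    return []  # NVMe drive in a slot that only shares with SATA M.2

print(disabled_ports("M2M_32G", "nvme"))  # ['SATA3_4', 'SATA3_5']
print(disabled_ports("M2A_32G", "nvme"))  # []
```

Run it with a few combinations and the problem jumps out: there's no slot that is both well placed and cost-free.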
It's not a trait unique to AMD... trying to work out which slots get what bandwidth, and when, on Intel will also have you pulling your hair out...
*Insert shameless promotion of glorious Threadripper here*
Jokes aside, I think all the high-price-premium mainboards based on mainstream chipsets (like the Asus ROG Maximus IX Extreme, ASRock Z270 Supercarrier etc.) have seriously distorted consumer expectations about what mainstream platforms are supposed to be (the hint is the word mainstream)...
In other words, I suspect the HEDT-like price of some mainstream boards has blinded people to the point where they expect them to provide HEDT-like functionality, reality be damned.
I wonder if this is why people are clamouring after mITX and mATX boards... less room on the board means less superfluous **** fighting for lanes with **** you might actually want to use.
For me at least that is 100% accurate.
It's not AMD at fault here; everything you describe is purely down to the motherboard manufacturer and also happens on the Intel side.
But anyway, if you don't know the limitations of the platform you are buying into, and struggle to read a manual, should you be buying a platform at all?
The CH7's PCIe choices are laid out clearly in the manual; you just have to read it - chapter 1, 'PCIe Operating Modes'. It doesn't seem that confusing to me. In addition, the thermals between GPU and CPU probably explain the heatsink placement: the bottom slot will be a lot cooler and therefore likely doesn't need one. A bit cheap-arse for such a premium board, though; both slots should have one.
Indeed, I have lanes aplenty, and it's not confusing, as I simply wouldn't buy a SATA M.2 - pointless.
There is the issue that both generations of Ryzen chipset do not support PCIe 3.0. At all. All lanes from the chipset are PCIe 2.0. On Ryzen, the PCIe 3.0 lanes you have available are:
- x16 from the CPU, which can be bifurcated down to x8/x8 (e.g. dual GPU) or x8/x4 (e.g. a second CPU-connected m.2 drive)
- x4 from the CPU, usually allocated to an m.2 slot
- x4 from the CPU that is ONLY available if the chipset is not present (e.g. the A/B/X300 'un-chipset'), so this is effectively unavailable, being occupied by the chipset on normal boards.
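A quick tally of that lane budget, using the figures listed above (routing is board-vendor dependent, so treat this as a rough model):

```python
# AM4 CPU PCIe 3.0 lane budget as described above (simplified model).
GPU_LANES = 16      # bifurcatable to x8/x8 or x8/x4
M2_LANES = 4        # the CPU-attached m.2 slot
CHIPSET_LINK = 4    # only free on the A/B/X300 'un-chipset' boards

usable = GPU_LANES + M2_LANES     # what a typical chipset board exposes
total = usable + CHIPSET_LINK
print(f"{usable} of {total} CPU PCIe 3.0 lanes reach slots on a chipset board")
# -> 20 of 24 CPU PCIe 3.0 lanes reach slots on a chipset board
```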
Compounding this is that AMD do not make any datasheets for the Ryzen chipsets available, so only motherboard manufacturers under NDA will even know what lane allocations are possible. On the Intel side, the datasheet gives you a nice allocation chart (200 series used as an example).
e.g. you can see that if you want to do three-way NVMe RAID (or NVMe RAID 0 plus Optane, or whatever), you're going to be giving up two or four SATA ports depending on where the motherboard manufacturer has routed them (I normally see the 'b' ports used, to allow two RAIDable NVMe slots to be offered).
::EDIT:: This doesn't stop manufacturers from doing Dumb Things, but it does mean you know what the chipset is capable of in the first place.
Again, this is a motherboard manufacturer issue. Your average consumer should never need to find a datasheet; every motherboard manufacturer should be capable of writing a comprehensive manual detailing how their product works.
In many ways it doesn't matter whether Intel's or AMD's chipset lanes are PCIe 2.0 or PCIe 3.0, as they are connected to the CPU via a wet piece of string, such that you'd max out the bus if you tried to use them all simultaneously.
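Back-of-envelope numbers for that 'wet piece of string' (the lane counts are the commonly quoted X470 figures; treat them as assumptions):

```python
# Downstream chipset bandwidth vs. the x4 uplink to the CPU.
PCIE2_GBS_PER_LANE = 0.5    # ~500 MB/s per PCIe 2.0 lane
PCIE3_GBS_PER_LANE = 0.985  # ~985 MB/s per PCIe 3.0 lane

downstream = 8 * PCIE2_GBS_PER_LANE  # 8 general-purpose Gen2 lanes (assumed)
uplink = 4 * PCIE3_GBS_PER_LANE      # PCIe 3.0 x4 link to the CPU

# SATA and USB traffic shares the same uplink, so the oversubscription
# is worse than this simple lane comparison suggests.
print(f"downstream {downstream:.1f} GB/s vs uplink {uplink:.1f} GB/s")
# -> downstream 4.0 GB/s vs uplink 3.9 GB/s
```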
If you want all those lanes and to actually use them, buy a proper platform: buy X399.
Guys, I think you're reading too much into the title here. Just to be clear, I'm not pointing the finger at AMD at all, but at motherboard manufacturers for not clearly identifying how their boards' M.2 ports are configured, as there are clearly differing configurations out there both in terms of bandwidth and SATA support. I also acknowledge and mentioned in the article that it isn't just an X470 or even AMD board-specific issue - for example, Asus listed SATA M.2 SSD support on some of its boards a couple of Intel chipsets ago when they were actually only compatible with PCIe devices. Plus, until recently, some of its boards wouldn't auto-detect M.2 SSDs properly and left the single M.2 port at x2 speed; you had to manually change it to x4 in the BIOS, which wasn't the case with MSI and Gigabyte at the time.
That's pretty much what I'm getting at. Everything else is straightforward on boards these days for the most part, but with every board review I do, especially X470 at the moment, I'm having to dig through spec sheets and manuals just to see which M.2 port I should be using for speed tests, and which one I should recommend to those wanting to use either a PCIe or SATA M.2 SSD. In my mind, if you have two ports, one should offer PCIe 3.0 x4 support without stealing bandwidth from other slots, with that port being the one to include a heatsink, while the second should offer SATA M.2 SSD support, so you have the option of using both simultaneously. End of story, and this should be clearly stated in the online specs. This is also especially important on mini-ITX boards, where using SATA M.2 SSDs is very useful to cut down on cable clutter (especially as they cost pretty much the same) while you might want to use a second PCIe SSD for your games and OS.
Not as much as you'd think. Outside of raw sequential transfer to RAM, you'd be hard-pressed to tell if a drive was connected directly or over DMI.
Heatsinks for m.2 are as cosmetic as RAM heatsinks, so physical position for m.2 slots is more important than whether a slot has a redundant sculpture stuck over it or not (e.g. if a slot is between two PCIe slots, it's going to be a pain to get to).
This is your problem: you are making assumptions about how you believe it should be, not about what is the best layout for a typical case. For example, you are saying the heatsink should be on the faster-performing slot rather than on the slot that sits in the hottest part of the board, next to the GPU, VRMs and CPU.
Case air flow is typically in from the front/bottom out through the rear/top.
You can install NVMe without taking bandwidth from other slots on CH7 if you install as they suggest in the manual.
It's exactly as much as I think: a 50% limitation on the Intel board through the PCH versus CPU-based RAID for file transfers.
What is the purpose of PCIe 3.0 over PCIe 2.0? Extra bandwidth. If you are not getting that bandwidth, then what is the point? They could make them PCIe 4.0 for an even more amazing spec point on the brochure, but you'd still have the same problem. It's like a four-lane motorway converging into a single cobbled track.
How well do you think the rest of the stuff performs on PCH once you have saturated that bus?
Edited, but ^This
OK, what workload do you have where, on a consumer board, you need to read at more than half a gigabyte per second while also reading from another device? Remember, PCIe (and DMI) are bidirectional, so even if you manage to saturate your read bandwidth, that has no impact on your write bandwidth. Direct device-to-device copies using DMA can bypass the DMI link anyway, so there's no need to worry about 'saturation' with regular file copies.
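To put numbers on the full-duplex point (the link figure is the usual spec-sheet value for DMI 3.0, which is electrically a PCIe 3.0 x4 link):

```python
# DMI 3.0 carries roughly 3.9 GB/s EACH direction.
DMI3_GBS_EACH_WAY = 3.9
read_load = 0.5   # the half-gigabyte-per-second read in question

print(f"read headroom left: {DMI3_GBS_EACH_WAY - read_load:.1f} GB/s")
# Writes travel the opposite direction, so reads don't eat into them:
print(f"write headroom: {DMI3_GBS_EACH_WAY:.1f} GB/s regardless of reads")
```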
It’s not a question of need, more of time. I use a multi-camera setup in my racecar and pull in a few streams, moving many GBs of high-bit-rate video data, and it’s time-consuming. To be honest, even my 1080p cams are capturing 100Mb/s streams, so they aren’t lightweight in terms of file size when shifting that lot around my system and to my NAS etc.
It's all quite data-transfer-heavy in and out. I used to see contention all the time on my Z77, no matter how much faster my i7 was than my current CPU in gaming performance :cry: Obviously with TR it’s now no problem; I have no such limitations, and this all happens in the background whilst I do other stuff, editing/gaming etc. Oooh, the powah.
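For a sense of scale, here's rough arithmetic for that workload. The 100Mb/s per-stream figure is from the post above; the camera count and link speeds are my assumptions:

```python
cameras = 4            # assumed; the post only says 'a few streams'
stream_mbps = 100      # megabits per second per camera, as stated
hour_GB = cameras * stream_mbps / 8 * 3600 / 1000   # GB captured per hour

# Time to shift one hour of footage over some typical links (MB/s):
for link, mb_per_s in [("gigabit LAN", 117), ("SATA SSD", 500), ("NVMe SSD", 3000)]:
    minutes = hour_GB * 1000 / mb_per_s / 60
    print(f"{link}: {minutes:.1f} min to move {hour_GB:.0f} GB")
```

Even four modest 1080p streams add up to 180GB an hour, so link contention matters quickly.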
So you're saying that you'd rather have the heatsink on a slot that offers only SATA support or reduced PCIe bandwidth if it was going to have more of a thermal impact there? In my mind that's totally wrong. Firstly, while the heatsinks do lower temperatures, they're more for aesthetic reasons; as you can see from our own benchmarks, they make next to zero impact on performance. Therefore I'd want to be able to use my Samsung 960 Evo in the fastest slot regardless, so if you're going to have one heatsink, it makes sense to have it available on that slot, not on a slot that will massively bottleneck the drive or is limited to SATA SSDs. Credit to Asus with the CH7, though, as the heatsink can be swapped between ports, but that's a rare feature.
I'd never accept a 50% drop in speed from 3,200MB/sec to 1,800MB/sec just so my Samsung 960 Pro runs 10°C cooler. The impact of thermal throttling is going to be a fraction of that, so it's a totally counterproductive exercise. I want the heatsink where the SSD is best placed, not the other way round, and as I'd argue that heatsinks on SATA SSDs are also pointless, the priority should always be to have the heatsink available on the M.2 slot where you'd want your PCIe SSD in the first place, i.e. the fastest one, not a port that trashes its bandwidth.
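The trade-off is easy to quantify. The 3,200/1,800MB/sec figures are from above; the throttle speed and how often throttling occurs are illustrative assumptions:

```python
full, gen2_limited = 3200, 1800        # MB/s in the fast vs. slow slot
slot_penalty = 1 - gen2_limited / full # permanent loss in the slow slot

# Assume the unheatsinked drive throttles to 1200 MB/s for 5% of a
# sustained transfer - pessimistic for typical desktop workloads.
throttle_speed, throttle_share = 1200, 0.05
avg = full * (1 - throttle_share) + throttle_speed * throttle_share

print(f"slow slot: {slot_penalty:.0%} loss; throttling: {1 - avg / full:.0%} average loss")
```

Even with pessimistic throttling assumptions, the slow slot costs an order of magnitude more performance than the missing heatsink does.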
I also never suggested where the ports should be placed, or that the heatsink should be placed for best thermal effect anyway; only how the bandwidth and SATA support should be allocated. However, it would obviously make no sense to have the heatsink on a port that only supported SATA SSDs or was limited to PCIe 2.0 when the other slot is where everyone should be installing their Samsung 960 Evo.
I completely agree that the heatsink should be placed where it's best used, and on the CH7 that's where it is out of the box. I get that. The problem there, though, is that slot will steal bandwidth from your graphics card, and that is not well documented. The manual only states that the primary PCIe x16 slot will fall from x16 to x8 when using both M.2 slots, NOT when using a single SSD in that heatsink-equipped slot. So sure, place the heatsink where it's best used, but unfortunately that port will cut your x16 slot's bandwidth in half, and that isn't mentioned.
...unless you've got a Ryzen G with Vega jobbie, in which case you straight up can't use the slot at all, because the APU can only run the PCIe x16 slot at x8 in the first place, so there's no bandwidth for the m.2 to steal.