Hi, I have a bit of an odd issue with NVMe and SLI. Hardware is:

- Asus X99 Strix
- Intel i7-5930K
- Samsung 950 Pro NVMe M.2 SSD
- 3x GTX 970 Asus Strix OC in SLI

After a long time running this on air with a single GPU, and a build project going astray, I've finally put it in the case under water. I have had this setup running in triple SLI before to test, although it was a long time ago.

The current issue: it boots fine with 1 GPU and with 2-way SLI, but when all 3 GPUs are installed the SSD drops out of the BIOS and is no longer recognised. I'm sure a setting must be off somewhere, as it has worked previously. All GPUs are in their correct slots as per the manual, and can only run at x8/x16/x8, so I can't use all 40 PCIe lanes. From what I understand, 8 lanes are controlled by the PCH chipset, so it's odd that the extra GPU is interfering. I can wiggle out 2 cards easily enough, but obviously swapping them around is more difficult due to the water cooling.

I have tried:

- Disabling CSM
- Forcing x8 on the card that usually runs at x16
- Updating drivers (the motherboard is on the latest BIOS anyway)
- Reseating the GPUs
- Wiggling the bottom 2 cards out of their PCIe slots: it boots fine, but pushing them back in brings the same issue back

Anyone have any suggestions?
Do you have to disable any onboard devices like USB/SATA, or is there a toggle for NVMe? Having run 3 cards for the past couple of years, I'd say either ditch the third card or ditch the lot of 970s for a better-specced card or two. Probably not what you want to hear, nor a very helpful answer, sorry.
Not particularly helpful, no. I'm well aware of the limitations of triple SLI, and with it looking to be phased out given the minimal 1000-series support, I saw this as my last opportunity to do a build with it. It is supported by the motherboard, and the board should boot without forgetting it has an M.2 drive, regardless of how good the setup might be.
It won't be your cards interfering; none of the card slots share bandwidth with M.2 or U.2. Only the M.2 shares bandwidth with U.2.
The ASUS website says: "3 x PCIe 3.0/2.0 x16 (x16, x16/x16, x8/x16/x8 mode with 40-LANE CPU; x16, x16/x8, x8/x8/x8 mode with 28-LANE CPU)". Simply meaning your board/chipset can't split the PCIe lanes into 4 groups. This is a guess, but I'd say it's a good one.
It doesn't matter how many ways the lanes have to be split, as long as the traces are allocated on the board AND the lanes are available from the source, be it CPU or chipset. Have you tried putting just the top and bottom cards in to see what it does? Have you checked whether there's a switch on the board that moves between 2-way and 3-way SLI? I know the X99 Deluxe has that switch, which is why I'm asking. Finally, I've read that rebooting over and over eventually got the drive seen. Have you tried that, just to see if it works?
No such switch on the Strix, and it's quite happily lighting up all 3 slots to point out where to fit the cards, so I can't imagine that's the issue. Top and bottom cards only works fine; I actually had a card in the incorrect slot originally, as I copied a mock-up of the tubing without checking the slots, so the lower slot definitely works. It also seems to have PCB temperature readings in the BIOS from cards 1 and 2 only, which is odd. I've rebooted quite a lot while troubleshooting, so yeah, I'd say I've tried that.
I'd have to say it might purely be a limitation of the board in that case. The bottom PCIe slot's lanes are provided by the PCH, not the CPU, which might be causing your issues.
The CPU and PCH are entirely separate controllers (and there are only 36 CPU lanes available to the PCIe slots, since the NVMe drive uses the other 4), so I can't see how that would be the case, but I've emailed Asus support to ask. The only configuration that would completely use up the available PCIe lanes is x16/x16/x8, which it doesn't run in SLI. The M.2/U.2 option isn't there in the latest BIOS.
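The lane arithmetic above can be sketched as a quick sanity check (a toy illustration only, assuming the 40-lane figure for the i7-5930K, the 4 lanes the thread says the M.2 slot uses, and the slot modes quoted from the ASUS spec):

```python
# Toy check of the PCIe lane arithmetic discussed in this thread.
# Assumptions (taken from the posts above, not an authoritative spec):
#   - 40 CPU lanes on the i7-5930K (40-lane Haswell-E part)
#   - 4 of those lanes are wired to the M.2 NVMe slot

CPU_LANES = 40
M2_LANES = 4

def lanes_left(slot_widths):
    """Return CPU lanes remaining after the GPU slots and M.2 are allocated."""
    return CPU_LANES - M2_LANES - sum(slot_widths)

# 3-way SLI mode on this board: x8/x16/x8 -> 32 lanes for the GPUs
print(lanes_left([8, 16, 8]))   # 40 - 4 - 32 = 4 lanes to spare

# A hypothetical x16/x16/x8 split would need all 40 lanes
print(lanes_left([16, 16, 8]))  # -4: over-subscribed, would starve the M.2
```

Which is the point being made: in the x8/x16/x8 mode the SLI setup actually uses, the M.2 drive's lanes shouldn't be contended at all.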
Slots 1, 2 and 4 of the x16 slots, as per the manual. They're also conveniently lit, so it's hard to get it wrong (once you realise you've put the cards in to match your mock-up tubing without checking).
The options are:

Launch CSM
- Auto
- Enabled
- Disabled

Boot Device Control
- UEFI and Legacy OPROM
- Legacy OPROM only
- UEFI only

Boot from Storage Devices
- Legacy only
- Ignore
- UEFI driver first

Boot from PCI-E
- UEFI driver first
- Legacy only

I tried what seemed to be the friendliest settings (Enabled, UEFI and Legacy OPROM, and UEFI driver first) and got nothing. I also changed the Secure Boot OS Type to Other OS instead of Windows UEFI, and still nothing.
Well, I finally tracked down the issue with this. After giving up on SLI and swapping out the triple-970 setup for a 1080 Ti due to the lack of support in SWBF 2, it did exactly the same thing again: no NVMe SSD recognised on boot. Much frustration! However, without the cards in the way I could now fiddle with the M.2 board without pulling the water loop apart. I unscrewed it to wiggle it around, and it reseated with a 'click'. No idea why it was fine with 1 or 2 cards and not 3; maybe some sort of voltage/signal-strength thing, or a resistance difference on a PCIe lane, combined with the poor connection in the M.2 slot? But it rebooted without my touching anything else and worked fine. So the moral of this story is: do the basic reseating troubleshooting, even if it means pulling a loop apart, or make sure you build it right in the first place!
A bit of a bugger to find that out, but you've probably ended up with a quicker system with the 1080 Ti, for the most part.