OK, so the NVMe-to-PCIe adapter isn't awesome. It works. Kind of. There are issues, though: it'll occasionally drop the connection and won't reconnect without a reboot, and about one in five cold boots won't initialise the 10Gb NIC in the slot. Anyhoo, I found a three-slot mobo with an x4 PCIe slot that fits in the M1: the Gigabyte B365M DS3H. So far so good.
So, I've been playing around on my NAS with a couple of 1TB SSDs that I'm not currently using... A single SSD in the NAS is hitting 480+ MB/s when moving large files like video or OS images. Two of them in RAID0 produce 800+ MB/s. I have a strong but entirely unjustified urge to buy four large-capacity SSDs and array them in RAID10. I think I need an intervention because this rabbit hole looks bloody deep.
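For anyone wondering where the RAID10 temptation comes from, here's a rough back-of-envelope sketch in Python. It's purely illustrative: the single-drive figure is the one measured above, but the 0.85 efficiency factor is an assumption to account for overhead, not something I've measured.

```python
# Rough back-of-envelope for RAID throughput scaling -- a sketch, not a benchmark.
# The single-drive figure is from the large-file copies above; the efficiency
# factor is an assumed fudge for controller/filesystem overhead.

SINGLE_SSD_MBPS = 480          # measured: one SSD, large sequential transfers
EFFICIENCY = 0.85              # assumed overhead factor, not measured

def raid0_estimate(drives: int) -> float:
    """RAID0 stripes across all drives, so throughput scales roughly linearly."""
    return drives * SINGLE_SSD_MBPS * EFFICIENCY

def raid10_estimate(drives: int) -> float:
    """RAID10 mirrors pairs then stripes the mirrors: sequential writes see
    roughly half the drives' combined bandwidth; reads can do better."""
    return (drives // 2) * SINGLE_SSD_MBPS * EFFICIENCY

print(f"2x RAID0  : ~{raid0_estimate(2):.0f} MB/s (observed ~800 MB/s)")
print(f"4x RAID10 : ~{raid10_estimate(4):.0f} MB/s sequential write estimate")
```

Which is to say: four drives in RAID10 would buy redundancy, not a big sequential-write jump over the two-drive stripe. That's the rational view. The shiny-thing view is another matter.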
Would knowing that I've spent circa £5000 on "pimp my storage/network" help you step back from the precipice? That said, I've bought/sold/designed/built/looked after storage systems in the tens of millions too, so it's fair to say the hole runs a lot deeper.
Yes, I used to run a RAID10 array with spinning disks, but that was on a GbE link, so the array's transfer rate saturated the link speed - the only tangible benefit was that the rate rarely dropped far below link speed, even on smaller files, because the combined drive performance had headroom to spare. The argument now is that striped SSD performance could almost saturate a 10GbE link in some use cases. I didn't say it was a strong argument - current performance is meeting the need I set before I started this, but the shiny-thing impulse is an ever-present battlefront for me.
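To put a number on the "almost saturate" claim, here's a quick sanity check, again just back-of-envelope Python; the usable link figure in practice depends on protocol overhead, so treat the raw line rate as an upper bound rather than a target.

```python
# Quick sanity check: how close does the two-SSD stripe get to a 10GbE link?
# 10GbE raw is 10 Gb/s; real-world TCP/SMB throughput usually lands a fair
# bit below the raw figure once protocol overhead is accounted for.

LINK_GBPS = 10                         # 10GbE raw line rate, in gigabits/s
RAW_LINK_MBPS = LINK_GBPS * 1000 / 8   # = 1250 MB/s before protocol overhead
STRIPED_SSD_MBPS = 800                 # the two-SSD RAID0 figure from earlier

utilisation = STRIPED_SSD_MBPS / RAW_LINK_MBPS
print(f"Raw 10GbE capacity : {RAW_LINK_MBPS:.0f} MB/s")
print(f"2x SSD RAID0       : {STRIPED_SSD_MBPS} MB/s "
      f"(~{utilisation:.0%} of the raw link)")
```

So the two-drive stripe sits somewhere around two-thirds of the raw link; add more drives to the stripe and the array, not the network, stops being the bottleneck. Hence the argument, weak as it is.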
Yeah, that worked. I only bought two 2TB SSDs; they arrived today. But... the write test isn't as impressive - though I messed up the screen capture and started an array rebuild before I realised.