I decided to set myself up a NAS and move to 10Gb. This was all working great: I was able to max my 10Gb ports on transfers and I was quite happy. My machines all have dual 10Gb NICs, and initially I teamed them and could saturate two links with multiple transfers. Then, after lots of dicking about with the network and OSes, I discovered that if I didn't LAG them the systems would automagically create a 20Gb full-bandwidth link and I could transfer to and from my NAS box at 20Gb, or ~2GB/s, which was more than I expected. I had an issue with the mobo and swapped it out, and now I can't seem to replicate this connection speed through Windows or Ubuntu, as I have no idea how it worked in the first place, but I'd like to get it going again. Through Google Fu I discovered this is expected behaviour and something to do with SMB Multichannel, but I can't seem to find a clear guide to make my system do it again. Not that I am unhappy with 10Gb network speed, but 2GB/s NVMe transfers over the network is quite nice and I'd like to do it again. Anyone know what I should be doing to get this to happen, either Win10 to Win10 or Win10 to Ubuntu?

Cheers
Sandy
SMB Multichannel should be on by default for Windows 10. Given it isn't working, you can try re-enabling it using the following in PowerShell:

Code:
Set-SmbServerConfiguration -EnableMultiChannel $true

Follow that with:

Code:
Set-SmbClientConfiguration -EnableMultiChannel $true

Re-test and see if it works. On Ubuntu I haven't a clue, but @Gareth Halfacree might do?
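When you re-test, it's also worth watching the SMB connections while a big copy is in flight; if Multichannel is doing its thing you should see connections spread across both NICs rather than one. A rough sketch using the stock SmbShare cmdlets, run on the client:

Code:
# Run this while a large transfer is in progress.
# With Multichannel working across dual 10Gb NICs, there should be
# connection entries for each client/server interface pair.
Get-SmbMultichannelConnection

# The plain SMB connection list is also handy for spotting which
# dialect the session negotiated (Multichannel needs SMB 3.x).
Get-SmbConnection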
I'd only be DuckDuckGoing, to be honest - it's been many a year since I had to care about supporting SMB clients on a Linux-based network!
It's very weird. Everything suggests it should be doing the business, but it's not. I'll play some more after work. I'd have been perfectly happy in 10Gb land if I hadn't had a taste.
On the two Windows hosts, what do you get if you run Get-NetAdapterRss -Name "*"? You should get something like this (on both hosts):

Code:
Name                                            : Slot06 x16 2
InterfaceDescription                            : Intel(R) Ethernet Converged Network Adapter X540-T1
Enabled                                         : True
NumberOfReceiveQueues                           : 8
Profile                                         : Closest
BaseProcessor: [Group:Number]                   : :0
MaxProcessor: [Group:Number]                    : :
MaxProcessors                                   : 16
RssProcessorArray: [Group:Number/NUMA Distance] :
IndirectionTable: [Group:Number]                :

If your "Enabled" is not True then you'll not be able to do SMB Multichannel over both NICs (it's the thing that lets SMBv3 spawn multiple TCP connections per copy). If it's supported, you should be able to enable it within your network driver settings. After that, as Gareth says, verify that Multichannel is enabled (on both hosts):

Code:
Get-SmbClientConfiguration | Select EnableMultichannel

EnableMultichannel
------------------
True
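If RSS does show as disabled, it can usually be switched on from PowerShell rather than digging through the driver GUI. A rough sketch below; "Slot06 x16 2" is just the example adapter name from the output above, so substitute whatever Get-NetAdapter reports on your box:

Code:
# Turn RSS on for the adapter (name taken from Get-NetAdapter)
Enable-NetAdapterRss -Name "Slot06 x16 2"

# Check which interfaces the SMB client considers usable for Multichannel.
# Both 10Gb NICs should show up here as RSS capable, each with its own IP.
Get-SmbClientNetworkInterface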
Never got around to sorting it; too much work cropped up and so it became low priority. Should have things wrapped up soon though, and then I can get back to playing with it.
Given that I'll probably forget about this thread, drop me a PM if you get stuck when you get back to playing with it.
OK, got it working again. Seems I had enabled teaming again, obviously a brainfart moment, and forgot that it is not what you want to do. SMB Multichannel works without link aggregation of any sort; the moment you turn that on, in either the switch or the driver, it breaks. It just needs everything to be set up in the simplest fashion to work: everything plugged into the hub and the NICs served with addresses from DHCP. Simples.

So I am now getting 20Gb transfers again. Well, actually I am not; I seem to have some performance issue this time around, and the most I have seen is 1.7GB/s whilst doing a 100GB compressed file transfer, but it is yo-yoing between 1 and 1.4GB/s after its initial high burst, so I need to look into what the issue is. It was rock solid at ~2GB/s previously. Might just be the fact I have gone from Debian to Windows... but I have also been messing with the system drive setup, so I may have borked something in the process. At least I have one step working again.
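For anyone else hitting the same wobble later, a quick sanity check that teaming really is out of the picture is to confirm both 10Gb ports show up as separate adapters, each with its own DHCP address. A rough sketch, with no adapter names assumed:

Code:
# Both physical 10Gb ports should be listed as independent, un-teamed adapters
Get-NetAdapter | Select-Object Name, InterfaceDescription, Status, LinkSpeed

# Each NIC should have its own IPv4 address from DHCP (PrefixOrigin = Dhcp)
# rather than sharing a single team address
Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress, PrefixOrigin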