EDIT: Having changed direction towards vSphere, I no longer have any iSCSI queries, and instead have vSphere queries... https://forums.bit-tech.net/index.php?threads/vsphere-noob-questions.355034/page-2#post-4596224

Despite spending the last 13 years working in the storage industry, over half of those as an admin or architect, I have zero hands-on experience of iSCSI. I've always rather looked down on it, since the big boys use FC or IB for block storage; however, this is my home setup I'm talking about here, so I'll happily trade big-boy status for cost-effective-boy status. I'm after some pointers to hopefully limit trial and error as much as practical.

The basic setup is as follows:

Server 1: Win 2016 DC, Hyper-V, Intel X520-DA2 NIC (today)
Server 2: Win 2016 DC, Hyper-V, probably the same NIC (future)
Switch: Ubiquiti US-16-XG, front-end VLAN, iSCSI VLAN
Storage: Synology RS1219+, 2x 10Gb (data), 4x 1Gb (one for management, 3x for nothing)

What I want:
- iSCSI boot for server 1
- iSCSI boot for server 2
- Shared iSCSI datastores for VMs (it seems there are too many caveats for SMB3 to be practical for my uses)

What I can't quite figure out right now (and do bear in mind that my head is trying to work in FC mode, so give me the benefit of the doubt if some of this is way off the mark):

- Setup steps. I'm guesstimating something along the lines of:
  1. Do some dickery in the first instance to configure at least one port for iSCSI boot: USB boot with the Intel config utility
  2. Create iSCSI targets on the storage: map all LUNs to one target and mask appropriately? Or three separate targets?
  3. Carve out LUNs
  4. Point the NICs at the iSCSI targets
  5. ...
  6. Profit
- Each server, and the storage, has a pair of 10GbE ports. Does this limit me to a single port for SAN and a single port for network, or can I VLAN-tag traffic to make use of both: a LAG on the front end plus iSCSI MPIO for storage?
- Anything else that I don't strictly need to do, but should? Either for security, performance, flexibility, or technical correctness?
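One small thing that helps with steps 2-4 before any boot-ROM dickery: sanity-check that each iSCSI portal is actually reachable over the iSCSI VLAN by probing TCP 3260 (the standard iSCSI portal port) from a host on that VLAN. A minimal sketch; the 10.0.20.x / 10.0.21.x portal addresses are hypothetical placeholders for one address per MPIO path:

```python
import socket

ISCSI_PORT = 3260  # default iSCSI portal TCP port

def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to the given iSCSI portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical portals: one per iSCSI VLAN, giving MPIO two paths.
    for portal in ("10.0.20.10", "10.0.21.10"):
        state = "up" if portal_reachable(portal) else "unreachable"
        print(f"{portal}:{ISCSI_PORT} {state}")
```

If one portal answers and the other doesn't, the problem is VLAN tagging or addressing rather than target/LUN mapping, which narrows the trial and error considerably.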