
vSphere Noob questions

Discussion in 'Tech Support' started by Mister_Tad, 17 Dec 2018.

  1. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    Joined:
    27 Dec 2002
    Posts:
    12,262
    Likes Received:
    748
    EDIT: Having changed direction towards vSphere, I no longer have any iSCSI queries, and instead have vSphere queries... https://forums.bit-tech.net/index.php?threads/vsphere-noob-questions.355034/page-2#post-4596224



    Despite spending the last 13 years working in the storage industry, over half of those as an admin or architect, I have zero hands-on experience of iSCSI. I've always kind of looked down on it as big boys use FC or IB for block storage, however this is my home setup I'm talking about here, so will happily concede the big-boy status for the cost-effective-boy one.

    I'm after some pointers to hopefully limit trial and error as much as practical.

    The basic setup is as follows...

    Server 1: Win 2016 DC, Hyper-V, Intel X520-DA2 NIC (today)
    Server 2: Win 2016 DC, Hyper-V, probably the same NIC (future)
    Switch: Ubiquiti US-16-XG, front end VLAN, iSCSI VLAN
    Storage: Synology RS1219+, 2x10Gb (data), 4x 1Gb (one for management, 3x for nothing)

    What I want:
    - iSCSI boot for server 1
    - iSCSI boot for server 2
    - Shared iSCSI datastores for VMs (seems there are too many caveats for SMB3 to be practical for my uses)

    What I can't quite figure out right now, and do bear in mind that my head is trying to work in FC mode and give me the benefit of the doubt if some of this is way off the mark...

    - Setup steps I'm guesstimating are something along the lines of:
    1. Do some dickery in the first instance to configure at least one port for iSCSI boot, USB boot with the intel config utility
    2. Create iSCSI targets on the storage - map all LUNs to one and mask appropriately? Or three separate targets?
    3. Carve out LUNs
    4. Point NICs at iSCSI targets
    5. ...
    6. Profit

    - Each server, and the storage, has a pair of 10GbE ports - does this limit me to a single port for SAN and a single port for network, or can I VLAN tag traffic to make use of both a LAG on the front end and iSCSI MPIO for storage?

    - Anything else that I don't strictly need to do, but should? Either for security, performance, flexibility, or technical correctness?
     
    Last edited: 23 Dec 2018
  2. saspro

    saspro IT monkey

    Joined:
    23 Apr 2009
    Posts:
    9,283
    Likes Received:
    231
    Been a while since I had to touch iSCSI but I'm sure it went something like this.
    Step 1. Create your underlying RAID set
    Step 2. Add volumes for each drive you need (you may need to add LUNs at this step)
    Step 3. Create iSCSI targets (you can either use one global target presenting all LUNs, or one per LUN, depending on security needs; might be worth giving boot LUNs their own target & setting CHAP on them)
    Step 4. Configure hosts to point at iSCSI targets & set boot order
    Step 5. Realise the above was a pain in the backside and change the boot LUNs to datastores with a VHD on them (this step may be optional)
    Step 6. Profit
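The target layout described above (each boot LUN on its own CHAP-protected target, datastore LUNs on one shared target) can be sketched as data - the IQNs, LUN names and sizes below are made up for illustration, not real Synology values:

```python
# Toy sketch of the target/LUN layout: dedicated CHAP targets for boot
# LUNs, one shared target for VM datastores. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Target:
    iqn: str
    luns: list = field(default_factory=list)  # LUNs mapped to this target
    chap: bool = False                        # CHAP auth enabled?

layout = [
    # One target per boot LUN, CHAP on, masked to a single host each...
    Target("iqn.2000-01.com.synology:rs1219.boot-server1", ["boot-server1"], chap=True),
    Target("iqn.2000-01.com.synology:rs1219.boot-server2", ["boot-server2"], chap=True),
    # ...and one shared target presenting the datastore LUNs to both hosts.
    Target("iqn.2000-01.com.synology:rs1219.datastores", ["vmfs-01", "vmfs-02"]),
]

# Sanity check: every boot LUN sits alone on a CHAP-protected target.
for t in layout:
    if any(lun.startswith("boot-") for lun in t.luns):
        assert t.chap and len(t.luns) == 1
```

The point of the split is that a compromised (or misconfigured) host can only ever see its own boot LUN plus the shared datastores, never another host's boot disk.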

    You can VLAN both NICs for storage & data. Just remember to configure something like Storage I/O Control if using VMware, to make sure one part doesn't saturate the NIC & impact the other (I'm sure Hyper-V has something similar but I wouldn't touch it with a bargepole).

    Don't forget MPIO works best with multiple subnets, and you don't need a gateway on the iSCSI network (it should be non-routable).
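The subnet advice can be sketched like so - the VLAN/subnet/address choices are example values only, not recommendations:

```python
# Minimal sketch of MPIO addressing: two non-routed subnets, one per
# storage path, with no gateway defined anywhere. Values are examples.
import ipaddress

path_a = ipaddress.ip_network("192.168.10.0/24")  # iSCSI VLAN A -> NIC 1
path_b = ipaddress.ip_network("192.168.20.0/24")  # iSCSI VLAN B -> NIC 2

# The two paths must not overlap, or MPIO can't keep them distinct.
assert not path_a.overlaps(path_b)

# One host port and one storage port per path; note no gateway is set,
# so the traffic simply can't leave the storage VLAN.
host1_a = ipaddress.ip_address("192.168.10.11")
nas_a   = ipaddress.ip_address("192.168.10.1")
assert host1_a in path_a and nas_a in path_a
```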

    Also consider adding a pair of 1GbE NICs (each with their own subnet & VLAN) to each host for management traffic, to follow best practice.

    If you're using clustering you may need a small witness LUN (1GB or so)

    That's all I can think of so far.
     
  3. Zoon

    Zoon Hunting Wabbits since the 80s

    Joined:
    12 Mar 2001
    Posts:
    5,029
    Likes Received:
    481
    For MPIO you need to create two new VLANs and two non-routed subnets. Usually you'd lock one storage VLAN to each host NIC, because if you trunk across both NICs you might end up with both storage VLANs sharing a single NIC. It's up to you how you nail it in the operating system, of course.

    You'll need to make sure the iSCSI initiator on the host is using the appropriate strings recommended by Synology. I've seen problems with performance and stability if it's not spot on.
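For reference, initiator names follow the standard IQN layout, `iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>]`. A rough format check might look like this - the example IQNs are illustrative, not Synology's required strings:

```python
# Hedged sketch: a quick shape check for IQN strings against the
# standard iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>] layout.
import re

IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_iqn(s: str) -> bool:
    """Return True if s is plausibly a well-formed IQN."""
    return bool(IQN_RE.match(s.lower()))

assert looks_like_iqn("iqn.1991-05.com.microsoft:server1")
assert not looks_like_iqn("server1-initiator")  # missing iqn prefix/date
```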

    I've never done iSCSI boot. I've only done hosts with a small local boot volume and then storage LUNs after. And I've only really done it from the network perspective, in honesty!
     
    Last edited: 18 Dec 2018
  4. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    I knew you two would come to my rescue :D

    Security "needs" are fairly lax, but I'd still like to follow security best practices where it's feasible.

    See this is why I asked... I didn't know this was doable for physical servers. Reading up on this now, thanks.

    I'd rather be using VMware, alas I've not found means of legitimate and cost-effective acquisition of ungimped ESXi, and do have a legitimate entitlement to W2016 editions that is very cost effective (i.e. no cost to me at all)

    I know a handful of vExperts, but I'd feel skeezy even hinting to any of them... "hey you know that reward you have for that thing you work hard for, gimme it plz"
    That said I've just pinged a buddy at VMware to see if there are any options, not sure why I didn't think of this before. But I'm not sure that will bear fruit... much like I can get perpetual lab licensing for my company's software, but probably wouldn't go handing it out to friends and family.

    I'm doing things a bit sloppier than I should be at the moment, simply for a lack of ports - I have a US-16-150 that's full (though I can free up a few ports moving them to the XG) and a US-8 hanging off that as a temporary measure, that's also near full. A US-48-500 is on the shopping list and will do me basically forever and allow everything to be technically correct, but I'm not sure I can bring myself to stomach that just yet, as this little project has been catastrophically expensive so far.

    Good tip thanks. This is exactly the sort of thing I'd get wrong out of ignorance and tear my hair out trying to unpick at a later date!
     
  5. Zoon

    Zoon Hunting Wabbits since the 80s

    I tried not to duplicate points that saspro has already made and better than I had ;)

    In my experience, the C: drive volume for your guest is usually on one LUN, with applications installed to a D: drive volume on another, high performance LUN. It's kinda up to you how you want to split it.

    Ref VMware - you can use ESXi for free, you just can't manage it in vCenter. So just manage your two hosts individually.

    Having said that I know quite a lot about configuring Hyper-V using the in-built Windows NIC team, setting up your vNICs etc, so if you need any specific advice on that just shout.
     
    Last edited: 18 Dec 2018
  6. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    I was on the fence as to whether I should split guests out like that - the principle is largely the same as you might apply on FC - separate tiers, separate I/O queues, snapshot policies etc.

    I'm not sure any of those things are going to be of appreciable benefit in my case though - it's all sitting on the same disks after all, and in terms of apps it's nothing particularly I/O intensive in the traditional sense.
     
  7. Zoon

    Zoon Hunting Wabbits since the 80s

    Yup, and as you're using Hyper-V you can use either dynamic allocation or you can shrink/grow your volumes using the built in tools anyway.
     
  8. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    So.... please disregard all that stuff I said about Hyper-V.

    I'll be using full fat vSphere 6, and naturally NFS datastores.

    For boot LUNs, should I still bother with iSCSI, or just throw it on a USB/SD?
     
  9. Zoon

    Zoon Hunting Wabbits since the 80s

  10. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    Yeah... makes sense I suppose. I kind of wanted to use iSCSI just because it's there and I can.

    I have a 16GB stick that I never use which will fit nicely into the USB port on the board, so will use that. Those SSDs look terrifying!
     
  11. Zoon

    Zoon Hunting Wabbits since the 80s

    The TCSUNBOW ones are supposed to be not too bad. But as you're doing ESX now it's overkill for storage for sure. Just make sure you have a backup off-box, as USB drives don't usually cope with write-intensive use - local logs etc. can kill a USB stick. Don't get me wrong, I had openmediavault on a USB stick for two years without any issues, but I had a friend whose USB stick died in 9 months in the same circumstances.
     
  12. Gareth Halfacree

    Gareth Halfacree WIIGII! Staff Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    12,811
    Likes Received:
    2,035
    I did this. Shoved /var onto a 4GB Kingston USB on a server as an experiment, so that the spinning-rust boot drive could spin down and save power. Lasted... Four months? Six? I forget.
     
  13. Zoon

    Zoon Hunting Wabbits since the 80s

    With openmediavault, there's a specific plugin you can install which creates a ramdisk and mounts the drives to that, flushing it back to USB periodically. Massively reduces the writes. I actually didn't install that and I'm still surprised the USB stick held up. When I replaced/rebuilt my server I did move to a normal SSD however. I figured I'd chanced it enough.
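The ramdisk-flush idea can be sketched roughly as follows - the paths are stand-ins, and the real plugin does considerably more (mount handling, flush on shutdown):

```python
# Hypothetical sketch: logs are written to a RAM-backed directory and
# periodically synced back to flash, so the USB stick only sees one
# batch of writes per interval instead of a constant trickle.
import os
import shutil

RAM_DIR = "/tmp/ramlog"       # stand-in for a tmpfs mount over /var/log
FLASH_DIR = "/tmp/usb-stick"  # stand-in for the USB-backed directory

def flush(src=RAM_DIR, dst=FLASH_DIR):
    """Copy the RAM-backed tree over the persistent copy in one pass."""
    os.makedirs(src, exist_ok=True)
    if os.path.exists(dst):
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

# A real setup would call flush() from cron/systemd every N minutes,
# and once more at shutdown so nothing in RAM is lost.
```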
     
  14. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    I'd move logging to the NAS and ESXi runs basically entirely in memory anyway, so the stick should be fine. In most of my experience with VMware in production, if ESXi wasn't booted from the SAN, it was sitting on an SD card in the server. That doesn't mean I won't have a backup plan of course.

    That said, as I was getting things set up last night (backing up and cannibalising the old server for CPU/mem, faffing with resetting IPMI on the new one, which was inevitably not done before it was sent to me), it was taking longer than I thought it would, so I didn't get to the point of actually setting anything up. I started to think that maybe I'd actually prefer Hyper-V... at least for now.

    Wait, what? Hear me out...

    All of the things that tend to make Hyper-V a pain IME generally focus around management at scale and System Center, neither of which are going to trouble me.
    The thing that makes vSphere great (again IME) in comparison is vCenter... alas I'll never have more than a few hosts and a dozen or so VMs, and vCenter is a major resource hog... at the moment there's only 16GB in the server, so that rules out the VCSA pretty much entirely - I mean, if I want to actually host any real VMs. I have a spare i5/8GB laptop I can throw the Windows version onto, which is what I was planning, until I took this step back to really consider what the better approach is.

    I think because I already had the W2016 NFR licenses available I thought I wanted to use vSphere more than I actually did... that old "I can't have it so I want it", and then a stack of vSphere NFR licenses landed in my lap and I didn't fully consider whether it was the right thing to do.

    I'm not entirely sure what to do now, still on the fence. Of course I've torn apart the old server now so I'll have to figure it out sooner rather than later. I might mull on it until the weekend.
     
  15. Zoon

    Zoon Hunting Wabbits since the 80s

    Since you already have the Server 2016 licenses, you can set up a cluster, as long as you create enough virtual NICs and subnets for the various fafferies, and that means you'll be able to live migrate between the hosts, and manage from a single Hyper-V console.

    With two separate VMware hosts, you'll have two standalone hosts with no vCenter due to your current RAM shortage.

    If you go with Windows on the tin, a cheap, small SSD looks appropriate again.

    This Kingston 120GB is just £19.98, thus avoiding the cheapo brands: https://www.amazon.co.uk/Kingston-Technology-SA400S37-120G-Solid/dp/B01N6JQS8C

    I'd recommend at least 120GB, because you're going to want to nail your virtual memory to double system RAM on a Hyper-V host. Ordinarily I'd build a 60GB C: partition, a VMEM partition, and a data partition. That seems a little overkill for you though - just have the full disk available. But bear in mind that if you're going to 32GB RAM in the future you'll want a 64GB pagefile, so you might even want 240GB, in which case this Crucial unit is probably the best deal: https://www.amazon.co.uk/Crucial-BX500-CT240BX500SSD1-Internal-NAND/dp/B07G3KRZBX/
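The arithmetic behind that sizing advice, as a quick sketch (the 60GB C: figure comes from the rule of thumb above):

```python
# Sketch of the SSD sizing rule above: pagefile pinned at 2x RAM,
# plus a ~60 GB OS partition, gives the minimum usable disk size.
def min_ssd_gb(ram_gb: int, os_partition_gb: int = 60) -> int:
    """Minimum SSD capacity in GB for a Hyper-V host with this much RAM."""
    return os_partition_gb + 2 * ram_gb

assert min_ssd_gb(16) == 92    # 16 GB RAM -> a 120 GB SSD has headroom
assert min_ssd_gb(32) == 124   # 32 GB RAM -> pushes you towards 240 GB
```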
     
    Last edited: 19 Dec 2018
  16. Zoon

    Zoon Hunting Wabbits since the 80s

    In case it's relevant, I did work on a five-year project to design and implement a 6-POP design using Hyper-V and iSCSI NAS with Windows Server 2016 clusters, across varying, double-digit numbers of hosts.

    Sure there's other ways of doing it but I know from experience that this way worked.
     
  17. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    I'd have vCenter, it would just mean re-purposing a laptop that's not being used anyway (actually, re-purposing a 4GB laptop that's not being used for the things an 8GB laptop is doing, and re-purposing that for vCenter) to run the Windows server version. Which would be fine I guess, but it kind of grinds my gears at the same time.
     
  18. Zoon

    Zoon Hunting Wabbits since the 80s

    Usually it makes sense to have one baremetal domain controller somewhere in your environment, and if that can be your vCenter server all the better I guess?

    What it comes down to then is how much you want to learn VMware infrastructure over using Hyper-V and Windows Cluster.
     
  19. Mister_Tad

    Mister_Tad Super Moderator Super Moderator

    I was thinking that if going for the Windows option, I'd still iSCSI boot... it gives me all the space I can shake a stick at, likely greater performance than a local SSD, storage redundancy and a simple approach to faff-free storage-side backup... and it's "free".

    Perhaps I'm leaning this way because I've not yet had the misfortune of setting up iSCSI boot though? I know some people are very against SAN boot, but I've never had a problem with it in the FC world.

    Really, I don't need to learn either... I'm unlikely to ever use any hands-on knowledge at this level of the stack from a professional sense on any platform, and from a personal development point of view, I've set up enough of my own sandbox labs for the hell of it only to stand back and think "right... what now", and tear it all down again and wonder why I spent so much time messing about when I've got access to more labs than I can count at work.

    So this is more "Home Prod" than "Home Lab" - I just want a more scalable way to add and manage services at home - it will be a learning experience either way to get things up and running and that's good, but I very much want to limit involuntary learning experiences months down the line when something isn't behaving like it should be.

    It could be useful to be handier with PowerShell, so perhaps that's a point towards Hyper-V, but at the same time more of the things I work with are friendlier to the vSphere API, so if I were ever to want to carry any of that across to home, that would be useful as well. Bleh.
     
  20. Zoon

    Zoon Hunting Wabbits since the 80s

    Okay. If you want some kind of HA between your servers you should use VMware. Windows Cluster requires same-generation CPUs on all hosts in the same cluster. VMware, AFAIK, will be far more tolerant.
     
