
Windows Various ESXi questions including running an Active Directory Domain Controller

Discussion in 'Software' started by Chicken76, 7 Oct 2012.

  1. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    Last edited: 7 Oct 2012
  2. faugusztin

    faugusztin I *am* the guy with two left hands

    Joined:
    11 Aug 2008
    Posts:
    6,931
    Likes Received:
    261
    This has nothing to do with ESXi/VMware Hypervisor, but rather with how time is measured in Windows. So yes, you will probably have to do it.
     
  3. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    Well, I was thinking VMware has a lot of customers running Windows Server on vSphere and might have automated this, say through VMware Tools.

    Does anybody run a Domain Controller virtualized in ESXi?
     
  4. Fanatic

    Fanatic Monimidder

    Joined:
    4 Jun 2010
    Posts:
    851
    Likes Received:
    49
    GeorgeStorm likes this.
  5. Fanatic

    Fanatic Monimidder

    Joined:
    4 Jun 2010
    Posts:
    851
    Likes Received:
    49
    Actually, I just remoted in to check the config on my test bed, and I have synced the DC VM and the ESXi server to their own NTP servers independently of each other.
     
    Chicken76 likes this.
  6. scott_chegg

    scott_chegg Active Member

    Joined:
    16 Feb 2010
    Posts:
    952
    Likes Received:
    83
    This is right. AD is very time-sensitive and time in a VM is not constant. Sync with NTP, not VMware Tools. We run all our DCs virtual and don't have any time sync problems.
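
    For the guest side, the usual approach (assuming a Windows Server DC holding the PDC emulator role, and with time sync unticked in VMware Tools) is to point it at external NTP sources with w32tm - the server list below is just an example:

    ```shell
    rem Run on the DC; swap in your preferred NTP pool servers
    w32tm /config /manualpeerlist:"0.uk.pool.ntp.org 1.uk.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
    w32tm /resync
    rem Verify the source and last sync time
    w32tm /query /status
    ```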
     
    Chicken76 likes this.
  7. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    Thanks guys. Have some rep.

    What NTP servers do you use/recommend?

    Edit: Another question:
    How much memory do you usually reserve for the hypervisor and all its related work?
    Say you have a 16GB machine, would running VMs totaling 14GB without host-based swapping be safe?
     
    Last edited: 7 Oct 2012
  8. Fanatic

    Fanatic Monimidder

    Joined:
    4 Jun 2010
    Posts:
    851
    Likes Received:
    49
    I have always used http://www.pool.ntp.org/zone/uk for my NTP needs across all my server installations.

    The RAM question is probably better answered by others with more VMware experience, but as I understand it, although the hypervisor requires some RAM for itself, it will dynamically acquire what it needs.
     
  9. Fanatic

    Fanatic Monimidder

    Joined:
    4 Jun 2010
    Posts:
    851
    Likes Received:
    49
  10. CraigWatson

    CraigWatson Level Chuck Norris

    Joined:
    9 Apr 2009
    Posts:
    721
    Likes Received:
    33
    ESXi is normally rock-solid and its footprint is next to nothing, so you can safely allocate all 16GB of host memory - or even overcommit and allocate more if you want more capacity. With VMware Tools installed on your guests, ESXi will deduplicate in-use RAM, so if any data is in guest RAM twice (say, two copies of the same OS), it will be stored in RAM only once.

    Obviously de-duplication only goes so far, and over-committing is generally considered to be a bad thing - if all of your VMs have a memory leak you'll be in trouble.
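
    If you want to see what page sharing is actually saving, esxtop on the host will show it - the field names below are from ESXi 5.x, so treat the exact labels as an assumption:

    ```shell
    # From the ESXi shell or over SSH:
    esxtop        # press 'm' to switch to the memory screen
    # The PSHARE/MB line reports shared, common and saving figures,
    # i.e. how much guest RAM page sharing is currently reclaiming.
    ```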

    Edit: Also make sure your ESXi host is synchronising to your NTP servers, as ESXi uses its own system time to generate the BIOS time of your guests. From the vSphere Client: Configuration -> Time Configuration (under the 'Software' panel).
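
    If you have shell access (Tech Support Mode), you can also check the host's NTP state directly - the paths and commands below are from ESXi 5.x and worth double-checking on your build:

    ```shell
    cat /etc/ntp.conf          # the servers you set in the vSphere Client end up here
    /etc/init.d/ntpd restart   # restart the NTP daemon after changes
    ntpq -p                    # show peers and sync status
    ```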
     
    Chicken76 likes this.
  11. Margo Baggins

    Margo Baggins I'm good at Soldering Super Moderator

    Joined:
    28 May 2010
    Posts:
    5,607
    Likes Received:
    242
    I quite often over-provision RAM on my test lab with no problems, though I don't think I would over-provision a production server. But I do have production servers that are provisioned to 100% of the RAM in the system.
     
    Chicken76 likes this.
  12. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    Thanks guys,

    Since you seem to know a lot more than me about ESXi, I'm going to try and abuse your goodwill with a few more questions:

    Is it generally a bad idea to have multiple VMs with the number of cores equal to the total number of cores available on the host? Are there bad scheduling issues? Does performance suffer a lot? I've read some opinions that it does, but wanted to hear from someone with some hands-on experience in production environments. I'm not going to have VMs do CPU-intensive work at the same time, but occasional races for CPU time may occur. Am I going to have unresponsive VMs?

    Have you used multiple NICs linked together with ESXi, and if so, have you used LACP or something else?
     
  13. CraigWatson

    CraigWatson Level Chuck Norris

    Joined:
    9 Apr 2009
    Posts:
    721
    Likes Received:
    33
    Questions are good, no worries :)

    CPU over-commit is fine, and if anything it's almost a given - the only CPU constraint that you should really care about is the MHz in use by the guests - ESXi will automatically manage the load across CPUs.

    The rule you may have read is to keep to one vCPU per VM - this is partly to avoid over-commit and contention, but also for VM performance reasons. If you enable fault tolerance across multiple ESXi hosts, you'll be limited to one vCPU, because ESXi has to keep both instances of the VM up-to-date across a 1Gb LAN (FT basically keeps a copy of the VM running on a different host to give seamless fault tolerance).

    In short: one vCPU per VM is adequate unless you have specific requirements for CPU usage :)
     
  14. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    Yes, one vCPU is what I intend to allocate to VMs. My question was about cores. Say I use a host with one quad-core CPU. Is it advisable to create VMs with one quad-core vCPU? The recommendations were to use single-core vCPUs unless required otherwise. My intent is to do some scripted backups with 7zip, which scales well with the number of cores. For that I shall need to stop that particular software, do the backup, and start it up again, all within the script. The less time it's down, the better. So having a VM with a quad vCPU is going to speed things up considerably. But, I'm worried about having other VMs with quad vCPUs running and ending up with unresponsive VMs.
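
    That stop/backup/start cycle is simple to script - a rough batch sketch, where the service name and paths are placeholders and `-mmt=on` is 7-Zip's multithreading switch:

    ```shell
    @echo off
    rem Stop the app so its files are consistent, archive them, then restart.
    net stop "MyAppService"
    "C:\Program Files\7-Zip\7z.exe" a -t7z -mmt=on D:\Backups\app.7z "D:\AppData\*"
    net start "MyAppService"
    ```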
     
  15. saspro

    saspro IT monkey

    Joined:
    23 Apr 2009
    Posts:
    9,388
    Likes Received:
    263
    Providing you're not spanking the VMs, then core sharing is fine. Just remember you're giving ESXi more work to do keeping track of things by allocating more cores than needed to a VM.
    Dual-core is fine for most things.

    Each core shows up as a vCPU in ESXi, so you have to balance multiple vCPUs vs multi-core vCPUs for licensing reasons (for example, you can't give Server 2008 Std 8 vCPUs as it can only use 4, but you can give it a single 8-core vCPU, or 2 quads).
     
  16. CraigWatson

    CraigWatson Level Chuck Norris

    Joined:
    9 Apr 2009
    Posts:
    721
    Likes Received:
    33
    You should be able to get things running with no problems if you over-commit on CPU - as saspro said you won't see any degradation unless you seriously load them.

    Alternatively, you might want to look at Veeam for backups of VMs - I've used it professionally and it's an amazing bit of kit - the free version is pretty decent too :)
     
  17. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    Thanks guys, I really appreciate it.
    Another question:
    Would you guys say paying for the cheapest Essentials Kit is worth it over just using the free ESXi?
    Here's a comparison table.
     
  18. CraigWatson

    CraigWatson Level Chuck Norris

    Joined:
    9 Apr 2009
    Posts:
    721
    Likes Received:
    33
    VMware's pricing is insane; unless you want to use any of the clustering features, it's not worth it. The free vSphere Hypervisor is more than adequate for most people's needs.

    What do you want to use ESXi for?
     
  19. Chicken76

    Chicken76 Active Member

    Joined:
    10 Nov 2009
    Posts:
    952
    Likes Received:
    32
    The most entry-level kit is only about $500 as far as I can tell. Even for a (smallish) business that's not much.
    I'm going to run a Windows server and a Linux server on it. I'd like to use the same hardware, with the option of migrating to new hardware as the need arises without reinstalling the OSes.
     
  20. CraigWatson

    CraigWatson Level Chuck Norris

    Joined:
    9 Apr 2009
    Posts:
    721
    Likes Received:
    33
    I would say with almost cast-iron certainty that you won't need the paid version. You can very easily migrate the VMs from old to new hardware, even with the free version - the VM disks themselves are just files on a filesystem (ESXi uses a filesystem called VMFS where Windows uses NTFS), so you can move both the VMDK disk files and the VMX (VM definition) files between two servers and just import the VM on the new server. You can also use VMware Converter (a free app) to virtualise existing physical machines if you need to.
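
    As a rough sketch of that manual migration (hostnames and datastore paths below are examples, and the VM should be powered off first):

    ```shell
    # On the old host, with SSH/Tech Support Mode enabled on both hosts:
    scp -r /vmfs/volumes/datastore1/MyVM root@new-host:/vmfs/volumes/datastore1/
    # Then on the new host: browse the datastore in the vSphere Client,
    # right-click MyVM.vmx and choose "Add to Inventory".
    # Note: scp will inflate thin-provisioned disks; vmkfstools -i can
    # clone a VMDK instead if you want to keep it thin.
    ```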

    The VMware vMotion features are normally used for concurrent server instances where you want to cluster and/or load-balance VMs between multiple live servers, rather than between old and new hardware. Ultimately the lower-end paid VMware versions are almost a waste of money; VMware likes to create the illusion that you need to pay for features you're never going to use :)

    A case in point: I have a dedi server with Hetzner running free ESXi 5, with seven VMs (one Windows plus five Ubuntu and an IPFire router) on 16GB RAM and a quad-core CPU with HT. I've allocated 17GB of RAM to VMs and only 14GB is actually in use (this is with a 4GB reservation for a client's VM).
     
