Discussion in 'Software' started by Chicken76, 7 Oct 2012.
It can do it whilst it's running, but it has to be started manually.
How does ESXi handle CPUs with Turbo Boost and SpeedStep? Is it handling the CPU multiplier itself? Otherwise how can it allocate MHz to VMs if it doesn't know what it can count on from the hardware?
^ Can't recommend this enough.
Had lots of issues with Veeam and my Windows boxes on some ESX 4.1 hosts. The ESX hosts were on some weird time, and the snapshot process started by Veeam would change the time on the guest to the ESX host's time, which was way different from the actual Windows time. As you can imagine, AD-based stuff went a little crazy, as did Exchange.
In the end I set up the ESX hosts, the guests and any other hardware I could think of to get time from one of the NTP pool project servers. All working well for now!
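For anyone wanting to do the same on their Linux guests, the fix can be sketched roughly like this - a minimal ntpd config pointing at the pool servers. The file path written here is illustrative (on most distros the real file is /etc/ntp.conf, and you'd restart ntpd afterwards):

```shell
# Sketch: point a Linux guest at the NTP pool project servers
# instead of relying on the (possibly wrong) ESX host clock.
# NTP_CONF is illustrative; the real file is usually /etc/ntp.conf.
NTP_CONF="./ntp.conf.example"

cat > "$NTP_CONF" <<'EOF'
# NTP pool project servers (round-robin DNS)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# Track clock drift between restarts
driftfile /var/lib/ntp/ntp.drift
EOF

grep -c '^server' "$NTP_CONF"
```

You'd also want to make sure VMware Tools time sync isn't fighting with ntpd - running two time sources on one guest is exactly how clocks end up "a little crazy".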
You're not going to use turbo boost or speedstep.
You usually set share allocations to let it know what gets what.
Should I disable Turbo Boost and SpeedStep, or is ESXi capable of handling them correctly?
ESXi should be aware of them.
However, there's little point having SpeedStep on, as the whole purpose of virtualisation is to maximise processor usage, so you shouldn't drop down to a level where SpeedStep can kick in.
Even at night, after all the backup/maintenance jobs have executed?
Emails still flow etc.
We've got 30-40 VMs per host in our setups, so there's always something doing something.
Not in my setup. There will be 2 VMs to start with and it won't grow beyond 4.
How many vCores are there in total for how many actual physical cores? I'm trying to see how much overcommitment is usually considered OK for production environments.
Even so, two idling VMs will create enough CPU traffic that SpeedStep isn't really worthwhile. Feel free to leave it turned on, as saspro said vSphere is aware of it and capable of using it.
Here's my allocation for my dedicated server:
As you can see, I've allocated 21 cores to VMs despite only having four available (well, eight if you include the fake HT 'cores'). CPU is allocated in MHz, not cores, so you can over-provision as much as you like there - I'm running at a 525% overcommit (262% if you count the HT cores as real cores) and I could probably double that very easily, assuming I have enough RAM, disk and network resources to match.
I've also allocated 15104MB of my 16291MB of RAM to the VMs, but only 9675MB is in use thanks to vSphere's deduplication and dynamic allocation policies. That said, a RAM spike is a lot more likely than a CPU spike, so you can typically over-commit by around 10-15% before you start getting into trouble.
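To make those ratios concrete, here's the arithmetic as a quick shell sketch (numbers taken straight from my setup above):

```shell
# CPU overcommit ratio: allocated vCPUs vs available cores.
vcpus=21     # vCPUs handed out to VMs
pcores=4     # physical cores on the host
htcores=8    # logical cores if you count Hyper-Threading

# Integer percentages (shell arithmetic truncates, which is fine here)
echo $(( vcpus * 100 / pcores ))    # vs physical cores -> 525
echo $(( vcpus * 100 / htcores ))   # vs HT cores -> 262
```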
Are those VMs predominantly Windows or Linux?
I also wonder how well an unsupported Linux will run (a distribution on which VMware Tools won't install). I guess the RAM overcommitment (and probably the CPU overcommitment, to a lesser degree) is only possible through the Tools.
Every VM bar one runs Ubuntu Server 12.04 LTS - the exception is a VM I host for a client, which runs CentOS 6.3.
I've recently written a Puppet module to handle VMware Tools installations and upgrades on Linux guests, and if the installer doesn't have the pre-built modules to insert directly into the kernel, it'll build them from source (assuming you have the kernel headers and build tools installed as well).
Well, I guess Ubuntu Server is supported by VMware Tools. I wonder if version 12 is also supported, or is it too recent?
On unsupported distributions, I guess installing/compiling them shouldn't be too complicated. Besides, I won't really need everything the Tools contain, and if I remember correctly from an older version, you could choose what to install and what to leave out.
Ubuntu (all versions) is done by compiling the drivers from source. However, this is all handled by the VMware Tools Perl install script, so you don't need to do anything at all - just make sure the build-essential metapackage is installed, untar the VMware Tools tarball and run the installer. It'll detect whether there are any prebuilt modules that match your kernel and compile them from source if not.
At work we run a multitude of distros (SUSE and SLES, CentOS, Red Hat Enterprise, Scientific, Ubuntu and a few Mandriva, I believe) and all of them have VMware Tools installed.
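The manual procedure looks roughly like this. The tarball filename below is a placeholder (the real one comes from the mounted VMware Tools virtual CD and varies by ESXi version), and this is a sketch, not a copy-paste installer:

```shell
#!/bin/sh
# Sketch of a manual VMware Tools install on a Linux guest.
# TARBALL is a placeholder name - the real file comes from the
# Tools virtual CD (VM menu > Guest > Install VMware Tools).
TARBALL="VMwareTools-x.y.z-build.tar.gz"

if [ -f "$TARBALL" ]; then
    tar -xzf "$TARBALL"
    cd vmware-tools-distrib || exit 1
    # --default accepts every prompt; the Perl installer uses
    # prebuilt modules if they match the running kernel and
    # compiles from source otherwise (needs build-essential
    # and the kernel headers installed first).
    sudo ./vmware-install.pl --default
else
    echo "tarball not found: mount the Tools CD first"
fi
```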
At the beginning I'll have a Win Srv 2012 and an Ubuntu Server 12, but my mostly used distribution is Slackware. I'll probably manage to install the important components on it.
OK, slightly different direction now:
What types of virtual NIC do you guys use in Win/Linux VMs? E1000, VMXNET3 or something else? What's recommended for each OS?
And the same questions, but this time for storage adapters: LSI Logic, BusLogic or Paravirtual?
VMXNET3 is generally the way to go - same for Paravirtual storage - mainly due to the performance increase you get (VMXNET3 gives you a 10Gb link between VMs on the same vSwitch IIRC).
That said, I've only just moved all of my VMs to VMXNET3 and Paravirtual SCSI (previously they were on E1000/LSI Logic) and I've never really had any major performance complaints, so YMMV.
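For what it's worth, the adapter choice ultimately boils down to a couple of lines in the VM's .vmx file. A sketch of what they look like (the path and filename are illustrative - in practice you'd change these through the vSphere Client, with the VM powered off and the right guest drivers already in place):

```shell
# Sketch: the .vmx keys behind the NIC and SCSI adapter choices.
# Path is illustrative; never edit a .vmx while the VM is running.
VMX="./example.vmx"

cat > "$VMX" <<'EOF'
ethernet0.virtualDev = "vmxnet3"
scsi0.virtualDev = "pvscsi"
EOF

grep virtualDev "$VMX"
```

Other valid values are "e1000" (and "e1000e" on newer hardware versions) for the NIC, and "lsilogic" or "buslogic" for the SCSI controller.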
Have you seen any improvements in performance after the switch?
VMware Knowledge Base is your friend.
Good info, but there are very few recommendations - it's mostly compatibility lists.
For example, they say E1000E is the default in Win8. Am I supposed to understand that it's also the recommended one for this OS because it will have the best performance, or is it simply the default because the Win8 installation kit has a driver for it, so it will work out of the box with good enough performance?
Should I apply the Win8 recommendations to Win Srv 2012 too?
The E1000 is recommended, I believe, because it offers the most out-of-the-box compatibility - it emulates an Intel card that is almost universally supported, so it's often the best choice. As for Server 2012, I'd say the E1000 is a decent choice, at least until VMware get their VMXNET3 drivers for the new platform sorted.