If you want the best performance from virtualization, you might want to consider container-based virtualization rather than hypervisor-based. Container-based VMs don't emulate hardware; instead, each container is assigned a share of the physical hardware directly. This means much less overhead (roughly 1-3% over bare metal) and generally better performance. I don't know for certain about VT-d support, but judging by the information I have seen on OpenVZ, it seems to be supported. The only limitation of container-based virtualization is that all VMs must run the same OS as the host system: if the host is Linux, then all VMs must be Linux, since they all share the host's kernel (which is where the performance gain comes from). OpenVZ runs off a Linux kernel.
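To give a rough idea of how lightweight this is in practice, here's a sketch of spinning up an OpenVZ container with vzctl (the container ID, template name and IP below are made-up example values - requires an OpenVZ host, so adjust for your system):

```shell
# Create a container from an OS template (ID 101 and the template
# name are just example values - use whatever your host provides)
vzctl create 101 --ostemplate centos-6-x86_64
vzctl set 101 --ipadd 192.168.1.101 --save
vzctl start 101

# The container reports the *host's* kernel version, because
# containers share the host kernel instead of booting their own -
# this is exactly where the low overhead comes from
vzctl exec 101 uname -r
uname -r
```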
No probs, glad to help. Any reason why you'd need specific PCI passthrough? Your post suggested NIC passthrough, which you can do via the software/hypervisor, so this isn't strictly "PCI" passthrough - networking is considered part of the software layer. You can of course limit your NICs to one VM each by configuring them that way; the inverse of what I was saying about multiple VMs on a NIC/vSwitch applies.

I stand corrected on VT-x/d and will take a look at the differences. I'd be very interested in the ASUS findings, as I've been an ASUS fan for years (my current desktop runs a P5Q-Pro and a Q6600 @ 3.7GHz).

Not sure what you mean there - I've been using VirtualBox since v2.x and I've never had an issue with drivers. There's been the odd stability issue, but nothing major.

If you're virtualising Snow Leopard, you're breaking Apple's EULA, which specifically states that the only version of OS X you're allowed to virtualise is OS X Server, and only on Apple hardware. I believe Apple have recently relaxed this (especially with Lion), which is why VirtualBox has supported emulating an EFI BIOS for a while and also officially supports OS X guests (though, as above, only OS X Server).
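For what it's worth, the "one NIC per VM" configuration I mentioned looks roughly like this in VirtualBox (the VM names and interface names here are just examples; substitute your own):

```shell
# Bridge VM1's first virtual NIC onto the first physical NIC,
# and VM2's onto the second. Each VM then effectively "owns" one
# physical interface, without needing real PCI passthrough.
VBoxManage modifyvm "VM1" --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm "VM2" --nic1 bridged --bridgeadapter1 eth1
```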
For example, being able to use a NIC that's not supported by VMware. I spent many hours trying to figure out why the ESXi installer was throwing up on my system without any intelligible error messages. As it turns out, there were no supported NICs in my system, so I had to yank a PCI-X card from a server, tape over the unused contacts and plug it into my computer before the installation could continue. Now, if I could use PCI passthrough to give a virtual machine exclusive access to the onboard NIC, I could install the driver for it in the guest OS as if it were running directly on the machine.

Another reason to make it work would be FREEDOM! Imagine using one USB controller in VM1 and another in VM2 (since most boards nowadays have two of them). It would make USB-connected device assignment redundant (every time you plug in, say ... a memory stick, you have to go configure the hypervisor). And think about all the myriad of devices that could be used natively by the virtual machines: SCSI and SATA controllers being "plugged" directly into a VM.

Here's a collection of information sources I have gathered on VT-d. If you find other relevant ones, please share them here. All of these refer to Intel hardware only. AMD's equivalent seems to be called IOMMU, but I have not ventured into the green side yet.

Intel article on VT-d. Pretty technical and boring. Link
Shorter and easier-to-read blog piece on VT-d by an Intel engineer. It also contains a link to another Intel guide on how to perform Direct Device Assignment in ESXi. Link
Interesting Intel roadmap on virtualization. Link
KVM's howto on the subject, with a link to the Xen wiki page containing a list of supported hardware. As you can see, it's pretty short, and ASUS BIOSes mostly have non-working support. Link

Now here starts the list of threads from different forums that contain useful information.

Link1 - Read this thread, especially page 2. You can see how the community is stumbling in the dark in this area, slowly gathering (empirical) data on the subject with very little official input from the industry.
Link2 - Very interesting "rant"
Link3 - Thread about Intel motherboards with VT-d support
Link4 - Build log for a storage server that would run ESXi and use VMDirectPath (VMware's name for it). Lots of hardware pr0n pics
Link5
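If anyone wants to check whether their board/CPU actually exposes VT-d to Linux before sinking money into hardware, a quick sanity check on a KVM or Xen host looks something like this (exact messages vary by kernel version):

```shell
# Enable the IOMMU at boot by adding this to the kernel command
# line in your bootloader config (Intel; AMD uses amd_iommu=on):
#   intel_iommu=on

# After rebooting, look for DMAR/IOMMU initialisation messages:
dmesg | grep -e DMAR -e IOMMU

# On recent kernels, populated IOMMU groups are another sign that
# the hardware is actually usable for device assignment:
ls /sys/kernel/iommu_groups/
```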
You could also check out Citrix XenServer, as it's available with a free license. I haven't really played too much with it, but from what I can gather the free version basically doesn't include the following features:

HA (High Availability)
Dynamic Resource Allocation
Memory Overcommit