I've done some googling but I haven't fully got my head around PC-over-IP (PCoIP). From what I gather it seems to offer better results than RDP. From what I've put together so far, you can have a PCI-e card which sits in the host server and handles the PCoIP side, allowing the clients to connect to the virtual machines on the host. I'm not even 100% sure this much is right.

Do the clients have to be thin/zero clients, or can the VMs be accessed over PCoIP via a software client running on a Windows machine? Do you need VT-d enabled hardware (or any other virtualisation technology) to make use of the PCoIP card in the host server? How well does all this fit into the VMware ecosystem? Do I need million-dollar licenses to run a PCoIP setup, or will it work off of ESXi?
We looked at some of these solutions from EVGA/Teradici at my old job to replace some aging Wyse gear. The suits opted to go with another plan of nettops running Windows Thin PC instead, which may also be worth looking into.

So you need: a server with ESXi with as much muscle and disk space as you would for hosting VMs in any other situation, PCoIP host cards, offload cards, GigE network infrastructure, and zero clients. VT-d is not absolutely necessary, the way it was explained, as much of the work is done by the host and offload cards anyway.

Cost-wise they are in the same realm as other competing thin-client-type solutions, and generally much cheaper than full-fat workstations. Performance-wise they are a hell of a lot closer to the feel of a current full-fat workstation than the aging Wyse stuff (VIA C3-based, 500 MHz, 256-512 MB DDR, 10/100) we were comparing them against. It is a lot more like being on their image rather than connecting to it, if you know what I mean.
How many hosts & clients are we talking about? How old are your existing PCs? Do you have any existing RDS services in operation? What's the end goal?
This is for a home setup, guys. Perhaps a bit overkill, but I would still like to run the setup out of curiosity/as a hobby rather than because it is essential or work related. I was thinking of recycling my 1366 setup into an ESXi server, so I could run various Linux distros, perhaps a couple of Windows O/S, and have a look at the Apple O/S as well. Just a virtualised playground for myself. There is no VT-d on my CPU unfortunately.

I had a look at the PCI Express cards; they seem to be a few hundred pounds, so not really crazy expensive, which makes it feasible to do for myself. All I would need is a few more sticks of RAM and maybe some network cards, depending on how my mobo turns out when it comes to ESXi compatibility.

Ivan, you mentioned offload cards, could you explain what part these play? Or could you elaborate a little on how the actual infrastructure is set up? (I've just watched a video on the EVGA website; it looks like a server offload card would be super overkill.)

My thoughts at the moment are that I could set up an ESXi server, throw in a PCoIP host card, and hopefully use a thin/zero client or free client software to access the VMs. Am I heading in the right direction, or am I missing something? I'm sorry if the questions are a bit simple, but googling really isn't turning up much for me. This type of setup seems to live more on the inside of corporate IT rather than being information that is freely out there.
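As an aside, you can at least confirm from a Linux live CD whether the CPU supports hardware virtualisation (VT-x/AMD-V) by reading `/proc/cpuinfo`; VT-d itself is a chipset/IOMMU feature and won't show up there (it tends to appear as DMAR/IOMMU lines in `dmesg` instead). A minimal sketch, with the helper name my own:

```python
# Sketch: check /proc/cpuinfo for hardware virtualisation flags.
# 'vmx' = Intel VT-x, 'svm' = AMD-V. Note VT-d is a chipset/IOMMU
# feature and does NOT appear here; look for DMAR lines in dmesg.
import os

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists vmx (Intel) or svm (AMD)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("hardware virtualisation supported:", has_hw_virt(f.read()))
```

This only tells you about CPU-level virtualisation; ESXi's own hardware compatibility is a separate question for the mobo/NICs.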
I wouldn't even consider it for fewer than 30 users. Especially for a single user, the performance benefits would be nearly non-existent for quite a large cost. Sure, it would be fun to tinker with, but in your situation I would definitely just go for something like a basic nettop, or even old cheap Athlon systems or whatever you can get for cheap, and remote in with RDP to your VMs on the server. If you just want to fiddle around with things for fun, you might as well do it on the cheap.
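The RDP side of the cheap route is trivial to script. From a Linux client box, FreeRDP can connect straight to a Windows VM; a small sketch that just assembles the command line (the host/user values are placeholders, and the flags shown are FreeRDP 2.x-style syntax):

```python
# Sketch: build an xfreerdp (FreeRDP 2.x) command line to reach a
# Windows VM on the ESXi box. Host/user/resolution are placeholders.
import subprocess

def rdp_command(host: str, user: str, size: str = "1920x1080") -> list:
    return [
        "xfreerdp",
        f"/v:{host}",      # target VM's address
        f"/u:{user}",      # Windows account on the VM
        f"/size:{size}",   # session resolution
        "+clipboard",      # share clipboard with the local machine
    ]

# Example (would actually launch the session on a real client):
# subprocess.run(rdp_command("192.168.1.50", "labuser"))
```

You'd run one of these per VM you want on screen; Linux guests would still need VNC/X forwarding or similar, since RDP is a Windows-native protocol.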
But couldn't I then do things such as watch YouTube videos or other media, which I wouldn't be able to do using RDP? (I try to avoid VNC where possible.) I believe I saw a dude playing Counter-Strike over PCoIP. I thought I could use a virtualised Linux as my 90%-of-the-time O/S, so it would be nice to have as close to a bare-metal experience as possible.
You could use something like MultiPoint Server with zero clients, although it isn't recommended for fewer than 5-10 peeps if you need Office on it.
I believe MultiPoint Server would be more about multiple instances of the one O/S. I'm more interested in having a few of each (Linux, Windows, Mac, etc.).

Edit: I've been looking into this a bit more, and if I was going to do this I believe I would need VMware View licenses, Server 2008 licenses, and just a bunch of crap that is way out of range of what I would spend on a home lab. Which is a pity.