Hi. Thanks for that. That's really cool. I'm 18th in the top 20 producers chart and I've only just started. Just shows what a couple of decent graphics cards can produce. It will be interesting to work out how much electricity they are using and hence the cost. I'm not quite sure that I really want to know the answer. On the plus side my study is now the warmest room in the house. AAAHHHH. Feel the heat! Pete
I have 5 cards folding and use £30-ish a week on electricity, including the rest of the household use for a family of 4. I swapped out a 560 Ti and two 460s for a 660 Ti a couple of weeks ago and went from 50-ish to 70-ish PPD while drawing 320-odd watts less, which is saving me some money on electricity; I was using £35 to £40 a week before.
Get a power meter from Amazon like this: http://www.amazon.co.uk/Energenie-E...F8&qid=1384851242&sr=8-1&keywords=power+meter Run it on each system for a while, see how many watts it's drawing, work out the kWh used over a day or a week, multiply by your electricity cost per kWh and hey presto.
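If it helps, here's a rough sketch of that sum in Python, with example figures plugged in (the wattage and the 14p/kWh tariff are placeholders, not anyone's actual numbers):

    # Rough weekly electricity cost from a plug-in power meter reading
    watts_drawn = 320          # example reading from the meter, in watts
    hours_per_week = 24 * 7    # a folding rig typically runs around the clock
    price_per_kwh = 0.14       # example tariff in GBP per kWh - swap in your own

    kwh_per_week = watts_drawn / 1000 * hours_per_week
    weekly_cost = kwh_per_week * price_per_kwh
    print(f"{kwh_per_week:.1f} kWh/week, roughly £{weekly_cost:.2f}/week")

By the same arithmetic, knocking 320 W off a 24/7 machine works out at around £7-8 a week at that example rate, which is in the same ballpark as the saving mentioned a couple of posts up.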
Another quick question. I understand that with Nvidia cards it is best to allocate one of your CPU cores to each of your GPUs so that it can perform at its best. Is the same true for AMD GPUs? I've read elsewhere that it only applies to Nvidia cards and not to AMD ones. Perhaps someone can shed some light on this for me, as I'm now folding on AMD cards. Many thanks. Pete
Only relevant for Nvidia cards; AMD cards run fine on their own. HFM is a monitor: it watches all the cards/clients you have running and displays what they are doing and how far through their units they are. If you are using V7 of the FAH client it isn't really needed, but if you have multiple systems folding it is handy, especially if they are headless machines. https://code.google.com/p/hfm-net/ *edit* said 6, meant 7
I thought it was a requirement for AMD cards too that a thread was made available for each card. I'm sure the v7 client auto-setup removed a thread for each AMD GPU I was running. In the end I just removed the CPU folding, but that was when I had 3 GPUs in tri-CrossFire. Sadly my computer could not keep them all cool running 24/7, so I removed one; motherboard temperature warnings were popping up on a regular basis. And now I find out they aren't reference cards, so no full-cover water block for me.
I did a bit of experimenting and took the CPU cores back up to 8 (4770K) and let it do its thing. The PPD estimate has risen slightly and there doesn't seem to be any adverse effect on the 4 GPU folding cores, which have all risen slightly too. I'll keep an eye on things to see what happens in the longer term.
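For anyone wanting to do the same juggling, the rule of thumb from the posts above is to leave one CPU thread free per Nvidia card (AMD cards reportedly don't need it). A quick sketch of that sum, with placeholder numbers rather than anyone's actual setup:

    # Rule-of-thumb CPU thread allocation when folding on CPU and Nvidia GPUs together
    total_threads = 8    # e.g. a 4770K with Hyper-Threading enabled
    nvidia_gpus = 2      # placeholder - count of Nvidia cards in the box

    # Reserve one thread per Nvidia card, give the rest to the CPU slot
    cpu_slot_threads = max(total_threads - nvidia_gpus, 0)
    print(f"CPU slot: {cpu_slot_threads} threads, {nvidia_gpus} reserved for feeding the GPUs")

With AMD cards the posts above suggest you can leave the CPU slot at the full thread count, though it's worth keeping an eye on PPD either way.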
Another question I'm afraid. I have various old computers lying around which would be very slow for CPU folding but are modern enough to have twin PCI-E x16 slots for SLI graphics cards. They would probably only be PCI-E version 1 slots, so they wouldn't have the bus bandwidth of current slots, but would these old motherboards and CPUs make a suitable base for running a couple of modern graphics cards, solely for folding? If that were possible without too much of a detrimental effect on GPU performance it would save me a whole load of dosh, as I'd only need to pick up the GPUs off eBay and not whole machines. Once again your knowledge and experience would be much appreciated. Cheers guys. Pete
Hi Pete. As long as they have a core each for Nvidia, yes, they're fine. I have had two GTX 460s running on an E5200 dual-core. You'll also want a decent PSU.
Folding doesn't require much bandwidth through the mobo at all. You can run folding cards in an x1 PCI-E slot by either getting a 1x-to-16x riser cable or by doing what I did: taking a knife to the motherboard and cutting out the end of the slot so the card fits. Bandwidth is almost irrelevant for folding.
So assuming that you don't bother with an SLI or CrossFire arrangement, is the limit on GPUs that can fold on a single motherboard simply the number of PCI-E slots the board has? And is that completely separate from the SLI/CrossFire limit the board can handle?
You can start to run into OS limitations past 4 cards, Windows isn't too happy sometimes, but it does work.
I don't expect Windows is the only limiting factor! The power supply possibly, and physically fitting more than 4 cards on any particular motherboard. Even the EVGA SR-2 seems to accommodate only 4 GPUs, and that's a massive board with 7 PCI-E slots. Most of my old mobos are Nvidia SLI compatible, but AMD GPUs seem to be cheaper cards with higher PPD than their Nvidia counterparts. Will AMD cards work in an Nvidia mobo? If I can re-use all my old kit with modern GPUs then I could get some other machines up and running without spending the earth.
Yes, you can run Nvidia cards in CrossFire boards and ATI cards in SLI boards, but you can't enable SLI on a CrossFire board or CrossFire on an SLI board. They'll be fine as multiple individual cards in separate slots, if you see what I mean.
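On the power supply point raised a couple of posts back, a rough way to sanity-check whether an old box can feed a couple of modern cards is to add up the card TDPs plus an allowance for the rest of the system and keep the total well under the PSU rating. A sketch with assumed example figures (the TDPs, the 120 W allowance and the 80% headroom rule are placeholders, not measurements):

    # Rough PSU sanity check for a multi-GPU folding box (all figures are assumed examples)
    gpu_tdps = [150, 150]   # placeholder TDPs in watts for two mid-range cards
    cpu_and_board = 120     # rough allowance for CPU, motherboard, drives and fans
    psu_rating = 550        # the wattage printed on the PSU label

    estimated_load = sum(gpu_tdps) + cpu_and_board
    comfortable_limit = 0.8 * psu_rating   # keep a sustained 24/7 load under ~80% of the rating

    print(f"Estimated load {estimated_load} W vs comfortable limit {comfortable_limit:.0f} W")
    print("Should be OK" if estimated_load <= comfortable_limit else "PSU probably too small")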
That sounds like a bargain! I might have just found a use for all those ASUS A8N-SLI boards that I have stashed away with AMD X2 processors. The CPUs are too slow for anything much now in terms of heavy-duty processing power, but if they can keep some graphics cards happy, then that's a result. CPU folding doesn't seem to make many PPD anyway; my 4770K @ 4.3GHz is doing less than 20k per day. Almost a waste of time bothering.
GPU folding with Core 17 has opened things up for 'budget' folders, because unless you can afford a system with more than 16 cores you can't do bigbeta and get the good points from CPU folding. I used to run my 2600K overclocked to 5GHz and used Linux to trick the system into thinking it had 12 cores so I could do bigbeta; I was getting 70-80k PPD on a £200 processor. That was a good few months.