Just upgraded my CPU from an Athlon II 250 to an X6. I have three GTS 450s in the same machine, so I've set the SMP client to -smp 4 (assuming that two GPUs can share one CPU core). Is this the best setting for my configuration, or would -smp 5 be just as good? What would be the best way to tell if another configuration is better? It's my understanding that programs like HFM (which I use) estimate your PPD based on the last few frames. With this in mind, do I just have to wait it out and watch my PPD to find the best setting?
I would set it to normal SMP (all six cores), and then experiment with setting the GPUs to low priority in the config and the SMP client to idle (and vice versa), to see which pays off best. I'd guess you'd want the GPUs above, with the SMP client just eating whatever CPU time is left. Out of interest, which chip did you go for?
Cheers for the replies, guys. I went for a 1090T BE as I haven't the time to OC at the moment; otherwise I would have gone for a 1055T and OC'd it.
I run my X6 on plain SMP (no number), which I assume picks up all six cores, since I can see them all working away in Task Manager. I also run four GTX 260s with the GPU3 client, configured as 'slightly higher'. No complaints about the PPD, but I have not rigorously experimented. HFM works by looking at the times in your log file for the last three frames (unless you have set the preferences differently). It averages the time taken for those frames, and calculates the PPD assuming that the same Project would be continuously downloaded and run to completion. So if your machine really does keep downloading the same Project (which is not unusual), the predicted PPD will match what you actually score. If you change the configuration of the GPU client, the changes only kick in when a new WU starts. So keep track of your predicted PPD and Project number, and then see how that changes when you change the configuration.
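The estimate described above is just arithmetic on frame times, so it can be sketched in a few lines. This is a rough illustration of the method, not HFM's actual code; the 100-frames-per-WU figure and the example point value are assumptions for the sake of the demo.

```python
def estimate_ppd(frame_times_sec, wu_points, frames_per_wu=100):
    """Estimate points per day from recent frame times (HFM-style).

    frame_times_sec: times for the last few frames, in seconds
    wu_points: base credit for the work unit (example value, varies by project)
    frames_per_wu: assumed frame count per WU (typically 100)
    """
    avg_frame = sum(frame_times_sec) / len(frame_times_sec)  # average seconds per frame
    seconds_per_wu = avg_frame * frames_per_wu               # projected time to finish a WU
    wus_per_day = 86400 / seconds_per_wu                     # WUs completed per 24 hours
    return wu_points * wus_per_day

# Three frames of 120 s each on a hypothetical 600-point project:
# 120 s/frame * 100 frames = 12,000 s per WU -> 7.2 WUs/day -> 4320 PPD
print(round(estimate_ppd([120, 120, 120], 600)))
```

Because only the last three frames feed the average, one slow frame (say, from the GPU clients stealing CPU time) will visibly dent the predicted PPD until it rolls out of the window.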
That's the CPU I've got, and it scores 4,000-6,000 PPD depending on the project, whilst simultaneously feeding four GPU3 clients, which are said to need about half a core each. I'm expecting the GPU3 client to gradually use less CPU time as it is updated and matures, in the same way that GPU2 did. I'm guessing there is a 'theoretical possibility' that the particular GPU projects running will affect the SMP output to a small extent, but I'm not recording any data on this.