Discussion in 'Article Discussion' started by Sifter3000, 5 Aug 2010.
you bought a 480 when you had the 5970?
If a GTX 460 is so good at Folding @ Home then why are my clothes still on the floor?
yep, and kept the 480 and sold the 5970
Sooo... you basically paid twice the money in the long run (for both cards, I mean) for a performance downgrade? Oooookey dokey.
Have done the same recently and I'm glad I did.
Sure, a performance downgrade in gaming, but not everyone buys GPUs purely for gaming.
I have to say, one thing that worries me about this article is the testing methodology. Were like-for-like projects used, for example? It doesn't really go into any detail on how PPD was calculated: what tool was used, or whether bonus points were added.
I've also noticed that in the drivers part it states:
# Nvidia GeForce GTX 260 896MB
# Nvidia GeForce GTX 260 (rev 2) 896MB
But only one GTX 260 result is given. Considering the 216-shader version folds better than the 192-shader version for obvious reasons, this seems very odd.
+1 - What is going on here?
Considering the whole article is on Folding, and therefore folding performance, the statement "it's actually more power efficient than the next fastest card, the GeForce GTX 480" is wrong. The GTX 295's performance per watt is lower, meaning it is not as efficient. Using your own figures, it is very clear that the GTX 480 is the more power-efficient folder: you are getting 46.2 points per watt with the GTX 480, whereas the GTX 295 gets only 43.1.
I first want to say thanks for an epic effort to test so many cards.
But, along those lines, one card was sorely missed: the GT 240. It has been shown to be very efficient. A GTS 250 would have been nice too.
Additionally, I believe your chart for Folding power efficiency is flawed. Since the PPD is divided by the TOTAL system power, the numbers are skewed. The wattage used should be the difference between the system at idle and the system Folding; that figure should then be used to determine PPD/watt. For instance, the 9600 GSO is listed at 20.3 points/watt. Thing is, the peak draw of the GSO itself is 72 watts, so 4181/72 = 58.1 points/watt. The system overhead hurts the lower-PPD cards. Take the GTX 480: its peak draw is 257 watts, so its points/watt is 13892/257 = 54.1. Oh, and I didn't even mention that we're dividing points per DAY by an instantaneous power draw in watts.
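To put rough numbers on the two methods, here's a quick Python sketch using only the figures quoted in this post (the 72 W and 257 W card-only peak draws are the numbers quoted here, not measurements from the review itself):

```python
# Two ways of computing folding efficiency, per the argument above.
# Figures are the ones quoted in this post, assumed accurate.

def ppd_per_watt(ppd, watts):
    """Points per day divided by a power-draw figure in watts."""
    return ppd / watts

# Card-only method: divide PPD by the card's own peak draw,
# instead of by total system power as the review's chart does.
gso_9600 = ppd_per_watt(4181, 72)    # 9600 GSO, card-only draw
gtx_480  = ppd_per_watt(13892, 257)  # GTX 480, card-only draw

print(f"9600 GSO: {gso_9600:.1f} points/watt")  # ~58.1
print(f"GTX 480:  {gtx_480:.1f} points/watt")   # ~54.1
```

By the card-only method the 9600 GSO comes out ahead (58.1 vs 54.1), whereas dividing by total system power buries the low-PPD cards under the fixed system overhead.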
How do you come to that conclusion?
Did you read page 2? All that info is listed there. As for bonus points, there's no such thing on any of the GPU clients; only the SMP CPU clients have them.
I've also noticed that in the drivers part it states:
Umm - once again, did you read the article? Both types of GTX 260 are listed in all the appropriate graphs.
You are correct with regard to the GTX 260s; it was late and I must have missed that. Same with the bonus points, my bad.
It still doesn't state whether an effort was made to ensure that similar work units were used. I know you can't really pick WUs, but the WU type and size are key to the PPD. If this was done over 24 hours for each card, I can see someone glancing at the PPD listed at the end of the 24 hours and using that, when it only applies to the WU the card is on. Was the benchmark viewer in HFM used, with an average taken from the ratings over all the projects completed within that 24 hours?
What about my first post? The closing statement is clearly flawed.
The testing was carried out with the same WU being processed by each GPU of the same family, i.e. all the ATI cards processed the same WU, all the GPU3 cards did another, and all the GT 200-series cards and the 9600 GSO a third. In short, the results are as consistent as you can get, although the PPD will vary between WUs from different project families; but as you said yourself, you can't pick and choose WUs.
Sorry, missed that one. To be honest it was either a typo or an editing error, so thanks for pointing it out. I've corrected it now.
OK, that is a good way of doing it; maybe mention that on the "how we tested" page. Quoting the WU project numbers used would also aid people in comparing their cards, although you may not have that data to hand.
The closing section looks much better now, thanks for listening.
Would be nice if you could add a GT240 as well, widely considered to be one of the best value folding cards around. But I would imagine you didn't/don't have one to hand.
You can say I went from 90 fps to 60, but all in all it still manages, and I get better results in everything else, like stability etc. When my 480 stops being able to get a consistent 60+ frames I will upgrade. I have the money from the 5970 just for that.
Surely it's no surprise.
From what I read about Fermi before its release, the entire Fermi architecture is built for GPGPU apps; actual functionality as a graphics card was essentially tacked on at the last possible moment. If there's one area I'd expect Fermi to be strong in, it's applications like Folding, because that's what it was primarily designed to do.
Not sure why people are acting surprised by the so-so at games, awesome at GPGPU computing angle...
Myself, I'll stick to ATI: designed for games with a little GPGPU tacked on, which is the same usage priority for the things I run.
And regarding the beaten-dead-horse driver stability issue: the main reason I switched to ATI was Nvidia's horribly unstable drivers, in my personal experience.
And just do the maths. Take the GTX 480: 13892 PPD / 301 watts = 46.15, or 46.2. But a watt is a rate; a 100 W light bulb consumes 100 watt-hours of energy every hour. A system drawing 301 watts therefore consumes 7224 watt-hours per day, which works out to 1.92 points per watt-hour by the method used in this review.
/nurses his 4870's wounded pride
Would like to see the OCed results.
Any chance this will be updated soon for the 500 series?
Interest here too, as I am going to be buying a single card initially with a second to follow, mainly for folding. Wondering what would be the best option at £150 per card?
Anyone know how good the 500 series Nvidia cards are for folding? I am going to buy one card now and another later.
Is there a 500 equivalent of the preferred GTX460?